- `include_related` query parameter to the `get_job` method
- New `search` method in the metadata client for searching across dbt resources
- Retry logic to the common session object for the following status codes: 429, 500, 502, 503, 504
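The retry behavior can be sketched roughly as follows. This is a minimal illustration, not the library's actual session implementation; `send` is a hypothetical stand-in for any zero-argument callable that performs a request and returns a status code.

```python
# Status codes the session will retry on, per the changelog entry.
RETRY_STATUS_CODES = {429, 500, 502, 503, 504}

def send_with_retries(send, max_retries=3):
    """Call `send` and retry while it returns a retryable status code."""
    status = send()
    for _ in range(max_retries):
        if status not in RETRY_STATUS_CODES:
            break
        status = send()
    return status
```

In practice the real session also applies a backoff between attempts; the sketch only shows the status-code gating.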
- `public_models` method on the `metadata` property - now allows for argument-based filtering.
- Rudderstack tracking code
- Add `payload` parameter to `create_managed_repository`
- `requiresMetricTime` field to `GetMetrics` semantic layer query
- The `list_environments` method and CLI invocations
- Versioning issue
- New command line groups (e.g. instead of `dbtc list-accounts`, you would use `dbtc accounts list`). Older methods are still around but will be deprecated in future versions.
- New discovery API convenience methods to retrieve performance, recommendations, and other information
- Semantic layer client. This can be accessed with the `sl` property on the `dbtCloudClient` class (e.g. `client.sl.query`)
- All of the methods in the `_MetadataClient` except for `query`. The Discovery API no longer allows a user to specify every single field recursively, which is what the `sgqlc` package would do.
- An optional keyword argument `use_beta_endpoint` to the `dbtCloudClient` class. This will default to `True`, which means that the Discovery API will use the beta endpoint at https://metadata./beta/graphql instead of https://metadata./graphql. This contains the stable API resources (environment, models, tests, etc.) as well as resources for performance, recommendations, and lineage.
- Ability to automatically paginate requests for the Discovery API. If pagination is required/desired, ensure that your query is properly created with an `$after` variable and all of the fields within the `pageInfo` field.
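The cursor-based pagination described above can be sketched as a loop over `pageInfo`. This is an illustrative sketch, not the library's internals; `execute` is a hypothetical callable standing in for a Discovery API request.

```python
def paginate(execute, query, variables=None):
    """Collect nodes across pages using an `$after` cursor variable.

    `execute(query, variables)` stands in for a Discovery API call and
    must return a dict with `nodes` plus a `pageInfo` dict containing
    `hasNextPage` and `endCursor` - the fields the query must select.
    """
    variables = dict(variables or {})
    nodes = []
    while True:
        page = execute(query, variables)
        nodes.extend(page["nodes"])
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            return nodes
        # Feed the cursor back in as the `$after` variable for the next page.
        variables["after"] = info["endCursor"]
```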
- Loosen restrictions on Pydantic - ">=2.0,<3.0"
- `retries` argument to the `trigger_job` method. This will allow you to retry a job `retries` number of times until completion, which is defined as `success` or `cancelled`.
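The retry-until-completion semantics can be sketched like this. It is a simplified stand-in for what `trigger_job` does with `retries`; `trigger` is a hypothetical callable returning a run's final status.

```python
# Completion is defined as `success` or `cancelled`, per the entry above.
COMPLETION_STATUSES = {"success", "cancelled"}

def trigger_with_retries(trigger, retries=1):
    """Re-trigger a job until it reaches completion, retrying at most
    `retries` additional times after the initial attempt."""
    status = trigger()
    for _ in range(retries):
        if status in COMPLETION_STATUSES:
            break
        status = trigger()
    return status
```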
- `trigger_job_from_failure` method to point at the new `rerun` endpoint. The logic is no longer necessary internally.
- `output` flag can now be used to pipe output into files instead of stdout
- The `-o` flag is no longer used for order-by when using that argument via the CLI; it is now used as an alternative for output (`--output` or `-o`)
- Typer version to `0.9.0`
- Remove read-only field `job_type` from job payload before cloning job
- Method used in the `update_environment_variables` method call from `POST` to `PUT`
- Methods to update and list environment variables
- `trigger_job_from_failure` method encountering an `IndexError` when called for the first run of the job
- `assign_user_to_group` method now accepts a `project_id` argument
- `delete_user_group` method now accepts a `payload` argument
- How the base URL was constructed, as it was not properly accounting for other regions and single-tenant instances
- Most recent updates for the Metadata API schema
- List, test, create, get, update, and delete methods for webhooks
- Support for pydantic models used for validation logic when creating webhooks - support for other create methods will be added eventually
- Decorator that sets a private property, `_called_from`, on the `_Client` class, which helps understand when methods are called from another method.
- `list_users` is now using a v3 endpoint
- All v4 methods were removed as dbt Cloud will begin to deprecate their use soon
- A `max_run_slots` keyword argument to the `trigger_autoscaling_ci_job` method. This will allow a user to limit the number of run slots that can be occupied by CI jobs. The default value will be `None`, which preserves the normal behavior of this method (e.g. it will clone the CI job until the number of run slots configured for the account is reached).
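The slot-capping decision can be sketched as a small predicate. This is an illustration of the described behavior, not the method's actual code; the function and parameter names are hypothetical.

```python
def can_clone_ci_job(occupied_slots, account_run_slots, max_run_slots=None):
    """Decide whether another CI run may be started.

    With `max_run_slots=None` the only cap is the account's configured
    run slots (the original behavior); otherwise CI runs are additionally
    capped at `max_run_slots`.
    """
    if max_run_slots is None:
        limit = account_run_slots
    else:
        limit = min(account_run_slots, max_run_slots)
    return occupied_slots < limit
```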
- An additional read-only field from a job definition needed to be removed prior to creating the cloned job; 500 errors were occurring because of this.
- `version` argument to the CLI. Invoke with `dbtc --version`.
- Ability to track what methods are being used. Important to note that you can opt out of this by passing `do_not_track=True` to the `dbtCloudClient` class. Additionally, nothing identifiable, like IDs, will be tracked - this is simply a way to understand what methods of the package are being used.
- Bad type argument for `poll_interval` in the CLI method for `trigger-job-from-failure`
- Additional keyword arguments to filter the `list_projects` endpoint by - `project_id`, `state`, `offset`, and `limit`. The `offset` will be useful if an account has greater than 100 projects (the max number of projects that can be returned).
- Additional keyword arguments to filter the `list_jobs` endpoint by - `environment_id`, `state`, `offset`, and `limit`. Important to note that the `project_id` can either be a single project_id integer or a list of project_ids.
- Convenience methods to return the most recent run, `get_most_recent_run`, and the most recent run's artifact, `get_most_recent_run_artifact`.
- Additional keyword arguments to filter the `list_environments` endpoint by - `dbt_version`, `name`, `type`, `state`, `offset`, and `limit`. Important to note that the `project_id` can either be a single project_id integer or a list of project_ids.
- `fields` argument to the methods on the `metadata` property. This allows you to limit the data returned from the Metadata API while still not having to write any GraphQL!
- `query` method on the `metadata` property. This allows you to write a GraphQL query and supply variables.
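How a list of fields can translate into a narrower GraphQL selection can be sketched as follows. This is purely illustrative of the idea behind the `fields` argument; the helper name and signature are hypothetical, not part of the library.

```python
def build_query(resource, fields, arguments=""):
    """Render a minimal GraphQL document that selects only `fields`,
    mirroring how a field list can limit the data returned."""
    selection = " ".join(fields)
    args = f"({arguments})" if arguments else ""
    return f"{{ {resource}{args} {{ {selection} }} }}"
```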
- A bug in `get_project_by_name`
- A bug in the CLI related to any methods that accept the `include_related` argument. This is now valid syntax: `'["debug_logs", "run_steps"]'`.
- Autoscaling CI jobs were being improperly cloned when adding a commit to the same PR.
- Finding in-progress PR runs using the PR ID within the payload
- In-progress runs weren't properly being cancelled within the `trigger_autoscaling_ci_job` method. In addition to checking if the job has an in-progress run, this method will now also check if there is a run in a "running" state for the PR ID given in the payload. This will ensure that a single PR can only have one run occurring at a given time (this wasn't the case in 0.3.0).
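The "one active run per PR" check can be sketched as a filter over existing runs. This is an illustration of the described behavior, with hypothetical field names; the real method works against dbt Cloud run payloads.

```python
# Statuses treated as in-progress for this sketch.
IN_PROGRESS = {"queued", "starting", "running"}

def runs_to_cancel(runs, pr_id):
    """Find in-progress runs tied to the same pull request, so only one
    run per PR stays active. Each run is a dict with a `status` and the
    PR id parsed from its payload (field names here are illustrative)."""
    return [
        r for r in runs
        if r["pull_request_id"] == pr_id and r["status"] in IN_PROGRESS
    ]
```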
- `trigger_autoscaling_ci_job` method to the `cloud` property of the `dbtCloudClient` class.
- The restart-from-failure functionality has now been moved to its own separate method, `trigger_job_from_failure`. You'll still be able to trigger a job using the `trigger_job` method.
- Non-JSON artifacts are now able to be retrieved from `get_run_artifact`
- Bad URL configuration for the `create_job` method
- Global CLI args `--warn-error` and `--use-experimental-parser` were not being considered. If they were present in the command, the modified command would have been invalid. These are now included within the `modified_command` if present in the initial step's command.
- `--full-refresh` flag is now being pulled into the `modified_command` if present in the initial step's command.
- Checking for an invalid result "skip" instead of "skipped" when identifying nodes that need to be rerun.
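The node-selection logic this fix touches can be sketched as a filter over `run_results` entries. This is illustrative only; the function name and the exact result shape are assumptions.

```python
# Result values that mark a node for rerun; note "skipped", not "skip".
RERUN_STATUSES = {"error", "fail", "skipped"}

def nodes_to_rerun(run_results):
    """Pick the unique_ids of nodes that errored, failed, or were
    skipped in the prior run."""
    return [r["unique_id"] for r in run_results if r["status"] in RERUN_STATUSES]
```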
- The ability to restart a job from failure. The `trigger_job` method now accepts an argument `restart_from_failure` (default `False`) that will determine whether or not the last run attempt for a job was unsuccessful - in the event it was, it will parse the steps within that job and find the nodes that it needs to rerun as well as any steps that were skipped entirely.
- Additional arguments to the `trigger_job` method:
  - `should_poll` - Indicate whether or not the method should poll for completion (default `True`)
  - `poll_interval` - How long in between polling requests (default 10 seconds)
  - `restart_from_failure` - Described above
  - `trigger_on_failure_only` - Only relevant when setting `restart_from_failure` to `True`. This has the effect, when set to `True`, of only triggering the job when the prior invocation was not successful. Otherwise, the function will exit prior to triggering the job (default `False`)
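The interaction of `restart_from_failure` and `trigger_on_failure_only` can be sketched as a trigger-gating predicate. This is a simplified illustration of the described behavior; the function name is hypothetical.

```python
def should_trigger(last_run_status, restart_from_failure=False,
                   trigger_on_failure_only=False):
    """Decide whether the job should actually be triggered.

    Only when both flags are set does the prior run's status gate the
    trigger: the job fires only if the last run was not successful.
    """
    if restart_from_failure and trigger_on_failure_only:
        return last_run_status != "success"
    return True
```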
- Logging to stderr when using the `trigger_job` method (internally using the `rich` package that comes when installing `Typer`)
- Multiple tests for the `restart_from_failure` functionality
- The `trigger_job_and_poll` method within the `cloud` property of the `dbtCloudClient` class. The polling functionality is now rolled up into the single `trigger_job` method with the argument `should_poll` (default is `True`)
- `get_model_by_environment` to the `metadata` property
- `meta` field is now available when you query columns
- The metadata methods are now available via the CLI
- A `status` arg can now be used in the `list_runs` method on the `cloud` property
- The `_dbt_cloud_request` private method, which is used in the CLI, now only uses `typer.echo` to return data from a request.
- The `trigger_job_and_poll` method now returns the `Run`, represented as a `dict`. It will no longer raise an exception if the result of the run is cancelled or error.
- The `cloud` property on the `dbtCloudClient` class now contains v3 endpoints
- The `dbtCloudClient` class is the main interface to the dbt Cloud APIs. The `cloud` property contains methods that allow for programmatic access to different resources within dbt Cloud (e.g. `dbtCloudClient().cloud.list_accounts()`). The `metadata` property contains methods that allow for retrieval of metadata related to a dbt Cloud job run (e.g. `dbtCloudClient().metadata.get_models(job_id, run_id)`).
- `dbtc` is a command line interface to the methods on the `dbtCloudClient` class (e.g. `dbtc list-accounts`)