Change being discussed
Prior to version 0.12.0, the Terraform provider handled columns that already existed in a dataset by importing them under the covers and then operating on them. In 0.12.0 the process changed: the provider now receives a 409 (Conflict) error from the Honeycomb API and fails with that error.
Issue caused by change
This has impacted downstream Honeycomb modules, as well as the Honeycomb tools developed to help customers migrate from Honeycomb Classic to Honeycomb Environments & Services. We're seeing breakage when these tools are used in environments where data already exists, which changes how the tools have to be used.
This isn't a bug - Why I wanted a discussion
This change aligns the provider with Terraform provider best practices: a provider isn't supposed to overwrite, modify, or try to create something that already exists but isn't in the state file. This is a standard safety precaution and the accepted best practice, so it shouldn't necessarily be reverted.
Discussion
How do we align going forward to ensure that customers hit 409 errors as infrequently as possible? Some of the options I've thought of:
Modules should default to not creating columns
We can update our modules and tools so that they can create new columns, but with creation disabled by default; users would enable it only in environments where they know data doesn't exist yet. This would require some updates to the downstream modules and some reworking of the migration tool.
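As a rough sketch of what that opt-in flag might look like in a downstream module (the variable name and the column attributes shown here are illustrative, not taken from any existing module):

```hcl
# Hypothetical module variable gating column creation; off by default.
variable "create_columns" {
  description = "Create columns in the dataset. Enable only on environments with no pre-existing data."
  type        = bool
  default     = false
}

resource "honeycombio_column" "duration_ms" {
  # count = 0 skips creation entirely when the flag is off,
  # so the provider never attempts the API call that returns a 409.
  count = var.create_columns ? 1 : 0

  name    = "duration_ms"
  type    = "float"
  dataset = var.dataset
}
```

Callers on fresh environments would set `create_columns = true`; everyone else keeps the default and avoids the conflict entirely.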
Add an overwrite function/option to the column resource in the provider
We could add the old functionality back but gate it behind a flag on the resource. While it's against Terraform best practice, Honeycomb objects aren't exactly infrastructure either. We'd also have to enable it by default to get downstream code working without modifications.
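A sketch of what the resource-level flag could look like (the `overwrite` argument is hypothetical and does not exist in the provider today):

```hcl
resource "honeycombio_column" "duration_ms" {
  name    = "duration_ms"
  type    = "float"
  dataset = var.dataset

  # Hypothetical flag: when true, the provider would adopt an existing
  # column of the same name (the pre-0.12.0 behavior) instead of
  # surfacing the 409 from the Honeycomb API.
  overwrite = true
}
```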
Both options?
Maybe we should do both: rewrite the downstream dependencies and update the resource?
Should we have data sources for columns and datasets?
If data sources for columns and datasets were available, downstream code could use them to determine whether things need to be created, or we could modify terraformer to emit data sources instead of standard column/dataset resources when exporting a current environment. This would most likely resolve the issue for the migration utility, but downstream modules would still need some reworking.
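For illustration, a terraformer export of an existing environment could emit something like the following instead of managed resources (this data source is hypothetical; the attribute names are guesses):

```hcl
# Hypothetical data source: reads an existing column without
# managing its lifecycle, so no create call (and no 409) ever happens.
data "honeycombio_column" "duration_ms" {
  dataset = var.dataset
  name    = "duration_ms"
}

# Downstream code can reference the column's attributes read-only.
output "duration_ms_type" {
  value = data.honeycombio_column.duration_ms.type
}
```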
Conclusion
I just wanted to bring this up and have some discussion with a broader team, so that a larger group of us agrees on a direction before I start opening a bunch of PRs to push things a certain way.
This is potentially more complex to solve well than it seems at face value. More time is needed to settle on a solid solution before the 1.0 milestone planned for early this year, but to improve the experience today:
honeycombio_column: delete column on destroy #258 has been merged and should be released this week as part of 0.13.0. This should reduce some of the pain when using the honeycombio_column resource during active development.