first terraform apply works, but subsequent plans or applies fail #2
Hey there. Let me first warn you that this provider is not thoroughly tested and might be unsafe in production. :) With that out of the way, it looks like it's crashing here on line 153: `terraform-provider-ssh/data_source_ssh_tunnel.go`, lines 149 to 155 at commit dfd6e8e.

That's the point where the tunnel is trying to send data back to whatever is using it, which I think is the terraform consul provider here. Any chance you could share your terraform code? The consul server version may be important too.
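For reference, here is a minimal sketch of the forwarding loop a data source like this typically runs, written against golang.org/x/crypto/ssh. This is an illustration only, not the provider's actual source; the jumpbox address, user, key path, and remote address are placeholders taken from the rest of the thread.

```go
// Sketch only: a local TCP listener whose connections are forwarded to a
// remote address through an SSH connection to a jumpbox. Placeholder values
// throughout; error handling trimmed for brevity.
package main

import (
	"io"
	"log"
	"net"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("jumpbox_key") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "root",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable in a sketch only
	}
	client, err := ssh.Dial("tcp", "x.x.x.x:22", cfg) // the jumpbox
	if err != nil {
		log.Fatal(err)
	}
	// "localhost:0" picks a free port, matching the local_address default below.
	listener, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("tunnel listening on %s", listener.Addr())
	for {
		local, err := listener.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go forward(local, client, "consul.service.consul:8500")
	}
}

// forward shuttles bytes both ways between the local connection (here, the
// consul provider's HTTP client) and the remote end dialed through the SSH
// client. The remote->local copy is the "sending data back" direction the
// crash points at.
func forward(local net.Conn, client *ssh.Client, remoteAddr string) {
	remote, err := client.Dial("tcp", remoteAddr)
	if err != nil {
		log.Printf("remote dial failed: %v", err)
		local.Close()
		return
	}
	go func() {
		io.Copy(local, remote) // remote -> local
		local.Close()
	}()
	io.Copy(remote, local) // local -> remote
	remote.Close()
}
```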
Sure. The consul version I'm using:

```console
root@jumpbox:~# curl -s http://consul.service.consul:8500/v1/agent/self | jq '.Config.Version'
"0.9.3"
```

My terraform code is below. It's separated into different modules.

My ssh tunnel module:

```hcl
provider "ssh" {}

variable "jumpbox" {
variable "jumpbox" {
default = {
user = ""
host = ""
private_key = ""
}
}
variable "remote_address" {
default = "consul.service.consul:8500"
}
variable "local_address" {
default = "localhost:0"
}
data "ssh_tunnel" "consul" {
user = "${var.jumpbox["user"]}"
host = "${var.jumpbox["host"]}"
private_key = "${var.jumpbox["private_key"]}"
local_address = "${var.local_address}"
remote_address = "${var.remote_address}"
}
output "local_address" {
value = "${data.ssh_tunnel.consul.local_address}"
}
output "local_port" {
value = "${data.ssh_tunnel.consul.port}"
} My "conf" module data "local_file" "confs" {
count = "${length(var.confs)}"
filename = "${path.module}/files/${element(var.confs, count.index)}.conf.tmpl"
}
output "conf_map" {
value = "${zipmap(var.confs, data.local_file.confs.*.content)}"
}
variable "confs" {
default = [
"a1",
"default",
"f4",
"restricted",
]
}
```

My consul module:

```hcl
provider "consul" {
address = "${var.address}"
scheme = "${var.scheme}"
}
variable "address" {
default = ""
}
variable "scheme" {
default = "http"
}
variable "nginx_conf_index" {
default = "0"
}
variable "confs" {
default = {}
}
variable "apps" {
default = {}
}
resource "consul_key_prefix" "confs" {
path_prefix = "nginx/${var.nginx_conf_index}/confs/"
subkeys = "${var.confs}"
}
resource "consul_key_prefix" "apps" {
path_prefix = "nginx/${var.nginx_conf_index}/apps/"
subkeys = "${var.apps}"
}
```

My root module:

```hcl
variable "jumpbox" {
default = {
user = "root"
host = "x.x.x.x"
}
}
variable "jumpbox_private_key_path" {
default = ""
}
variable "local_address" {
default = "localhost:0"
}
variable "nginx_conf_index" {
default = "0"
}
variable "apps" {
default = [
"a1",
"f4",
"restricted",
]
}
variable "confs" {
default = [
"a1",
"default",
"f4",
"restricted",
]
}
data "local_file" "apps" {
count = "${length(var.apps)}"
filename = "${path.root}/files/apps/${element(var.apps, count.index)}.json"
}
data "local_file" "jumpbox_private_key" {
filename = "${var.jumpbox_private_key_path}"
}
module "ssh_consul" {
source = "../../../consul-nginx/ssh"
jumpbox = "${merge(var.jumpbox, map("private_key", data.local_file.jumpbox_private_key.content))}"
remote_address = "consul.service.consul:8500"
local_address = "${var.local_address}"
}
module "confs" {
source = "../../../consul-nginx/confs"
confs = "${var.confs}"
}
module "consul" {
source = "../../../consul-nginx/consul"
address = "${module.ssh_consul.local_address}"
scheme = "http"
confs = "${module.confs.conf_map}"
apps = "${zipmap(var.apps, data.local_file.apps.*.content)}"
nginx_conf_index = "${var.nginx_conf_index}"
}
```
I tried some other providers (marathon and vault), and they work fine with the ssh provider, with no issues on subsequent applies and plans, so this really seems like an issue specific to the consul provider.
First off, thanks for making this provider; I've been wanting this functionality in terraform for a while.
I'm trying to use this module to maintain keys in a consul cluster that sits behind a jumpbox. It works great the first time, but when attempting to run `terraform plan` or `terraform apply` a second time, terraform crashes.

terraform version info:
Here's what I'm seeing: