first terraform apply works, but subsequent plans or applies fail #2

tomwganem opened this issue Mar 21, 2018 · 3 comments
tomwganem commented Mar 21, 2018

First off, thanks for making this provider; I've been wanting this functionality in terraform for a while.

I'm trying to use this provider to maintain keys in a consul cluster that sits behind a jumpbox. It works great the first time, but when I run terraform plan or terraform apply a second time, terraform crashes.

terraform version info:

$ terraform version
Terraform v0.11.5
+ provider.consul v1.0.0
+ provider.local v1.1.0
+ provider.ssh (unversioned)

Here's what I'm seeing:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.local_file.jumpbox_private_key: Refreshing state...
data.local_file.confs[3]: Refreshing state...
data.local_file.confs[0]: Refreshing state...
data.local_file.confs[1]: Refreshing state...
data.local_file.apps[2]: Refreshing state...
data.local_file.apps[1]: Refreshing state...
data.local_file.confs[2]: Refreshing state...
data.local_file.apps[0]: Refreshing state...
data.ssh_tunnel.consul: Refreshing state...
consul_key_prefix.confs: Refreshing state... (ID: nginx/3/confs/)
consul_key_prefix.apps: Refreshing state... (ID: nginx/3/apps/)

------------------------------------------------------------------------

Error: Error running plan: 1 error(s) occurred:

* module.ssh.provider.ssh: connection is shut down


panic: read tcp 127.0.0.1:57681->127.0.0.1:57684: read: connection reset by peer
2018-03-21T16:25:07.639-0700 [DEBUG] plugin.terraform-provider-ssh:
2018-03-21T16:25:07.639-0700 [DEBUG] plugin.terraform-provider-ssh: goroutine 76 [running]:
2018-03-21T16:25:07.639-0700 [DEBUG] plugin.terraform-provider-ssh: main.dataSourceSSHTunnelRead.func1.1(0xc420010ff0, 0x1fa4f20, 0xc42016b020, 0x1fa4fe0, 0xc4202f80c8)
2018-03-21T16:25:07.639-0700 [DEBUG] plugin.terraform-provider-ssh:     /Users/tomwganem/go/src/github.com/stefansundin/terraform-provider-ssh/data_source_ssh_tunnel.go:153 +0x114
2018-03-21T16:25:07.639-0700 [DEBUG] plugin.terraform-provider-ssh: created by main.dataSourceSSHTunnelRead.func1
2018-03-21T16:25:07.639-0700 [DEBUG] plugin.terraform-provider-ssh:     /Users/tomwganem/go/src/github.com/stefansundin/terraform-provider-ssh/data_source_ssh_tunnel.go:150 +0xcc
2018/03/21 16:25:07 [DEBUG] Attaching resource state to "data.local_file.apps": &terraform.ResourceState{Type:"local_file", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc420422dc0), Deposed:[]*terraform.InstanceState{}, Provider:"provider.local", mu:sync.Mutex{state:0, sema:0x0}}
2018/03/21 16:25:07 [DEBUG] Attaching resource state to "data.local_file.jumpbox_private_key": &terraform.ResourceState{Type:"local_file", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc420422d20), Deposed:[]*terraform.InstanceState{}, Provider:"provider.local", mu:sync.Mutex{state:0, sema:0x0}}
2018/03/21 16:25:07 [DEBUG] Attaching resource state to "module.ssh.data.ssh_tunnel.consul": &terraform.ResourceState{Type:"ssh_tunnel", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc420423590), Deposed:[]*terraform.InstanceState{}, Provider:"module.ssh.provider.ssh", mu:sync.Mutex{state:0, sema:0x0}}
2018/03/21 16:25:07 [DEBUG] Attaching resource state to "module.confs.data.local_file.confs": &terraform.ResourceState{Type:"local_file", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc420423090), Deposed:[]*terraform.InstanceState{}, Provider:"provider.local", mu:sync.Mutex{state:0, sema:0x0}}
2018/03/21 16:25:07 [DEBUG] Attaching resource state to "module.consul.consul_key_prefix.confs": &terraform.ResourceState{Type:"consul_key_prefix", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc420423450), Deposed:[]*terraform.InstanceState{}, Provider:"module.consul.provider.consul", mu:sync.Mutex{state:0, sema:0x0}}
2018/03/21 16:25:07 [TRACE] Graph after step *terraform.AttachStateTransformer:
stefansundin (Owner) commented

Hey there. Let me first warn you that this provider is not thoroughly tested and might be unsafe in production. :)

With that out of the way, it looks like it's crashing here on line 153 of data_source_ssh_tunnel.go:

// Send traffic from the SSH server -> local program
go func() {
  _, err = io.Copy(sshConn, localConn)
  if err != nil {
    panic(err)
  }
}()

That's when it's trying to send data back to the thing that is using the SSH tunnel, which I think is the terraform consul provider.
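
For what it's worth, a common pattern for this kind of copy loop is to treat a reset or EOF as "the tunnel is done" and shut both ends down, rather than panicking and taking the whole plugin process with it. The sketch below only illustrates that idea; copyAndClose is a hypothetical helper, not the provider's actual code, and sshConn/localConn are the two connections from the snippet above.

package tunnel

import (
  "io"
  "log"
)

// copyAndClose (hypothetical) copies data from src to dst and closes both
// ends when the copy stops. A "connection reset by peer" or EOF is logged
// instead of panicking, so one closed stream does not crash the plugin.
func copyAndClose(dst io.WriteCloser, src io.ReadCloser) {
  defer dst.Close()
  defer src.Close()
  if _, err := io.Copy(dst, src); err != nil {
    log.Printf("[DEBUG] ssh tunnel copy ended: %v", err)
  }
}

It would replace the two panicking goroutines with something like go copyAndClose(sshConn, localConn) and go copyAndClose(localConn, sshConn), assuming both connections implement io.ReadWriteCloser (net.Conn does).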

Any chance you could share your terraform code? The consul server version may be important too.

tomwganem (Author) commented

Sure.

The consul version I'm using:

root@jumpbox:~# curl -s http://consul.service.consul:8500/v1/agent/self | jq '.Config.Version'
"0.9.3"

My terraform code is below. It's separated into different modules.

My ssh tunnel module:

provider "ssh" {}
variable "jumpbox" {
  default = {
    user = ""
    host = ""
    private_key = ""
  }
}
variable "remote_address" {
  default = "consul.service.consul:8500"
}
variable "local_address" {
  default = "localhost:0"
}
data "ssh_tunnel" "consul" {
  user            = "${var.jumpbox["user"]}"
  host            = "${var.jumpbox["host"]}"
  private_key     = "${var.jumpbox["private_key"]}"
  local_address   = "${var.local_address}"
  remote_address  = "${var.remote_address}"
}
output "local_address" {
  value = "${data.ssh_tunnel.consul.local_address}"
}
output "local_port" {
  value = "${data.ssh_tunnel.consul.port}"
}

My "conf" module

data "local_file" "confs" {
  count    = "${length(var.confs)}"
  filename = "${path.module}/files/${element(var.confs, count.index)}.conf.tmpl"
}
output "conf_map" {
  value = "${zipmap(var.confs, data.local_file.confs.*.content)}"
}
variable "confs" {
  default = [
    "a1",
    "default",
    "f4",
    "restricted",
  ]
}

My consul module:

provider "consul" {
  address    = "${var.address}"
  scheme     = "${var.scheme}"
}
variable "address" {
  default = ""
}
variable "scheme" {
  default = "http"
}
variable "nginx_conf_index" {
  default = "0"
}
variable "confs" {
  default = {}
}
variable "apps" {
  default = {}
}
resource "consul_key_prefix" "confs" {
  path_prefix = "nginx/${var.nginx_conf_index}/confs/"
  subkeys = "${var.confs}"
}
resource "consul_key_prefix" "apps" {
  path_prefix = "nginx/${var.nginx_conf_index}/apps/"
  subkeys = "${var.apps}"
}

My root module:

variable "jumpbox" {
  default = {
    user = "root"
    host = "x.x.x.x"
  }
}
variable "jumpbox_private_key_path" {
  default = ""
}
variable "local_address" {
  default = "localhost:0"
}
variable "nginx_conf_index" {
  default = "0"
}
variable "apps" {
  default = [
    "a1",
    "f4",
    "restricted",
  ]
}
variable "confs" {
  default = [
    "a1",
    "default",
    "f4",
    "restricted",
  ]
}
data "local_file" "apps" {
  count    = "${length(var.apps)}"
  filename = "${path.root}/files/apps/${element(var.apps, count.index)}.json"
}
data "local_file" "jumpbox_private_key" {
  filename = "${var.jumpbox_private_key_path}"
}
module "ssh_consul" {
  source = "../../../consul-nginx/ssh"
  jumpbox =  "${merge(var.jumpbox, map("private_key", data.local_file.jumpbox_private_key.content))}"
  remote_address = "consul.service.consul:8500"
  local_address = "${var.local_address}"
}
module "confs" {
  source = "../../../consul-nginx/confs"
  confs = "${var.confs}"
}
module "consul" {
  source = "../../../consul-nginx/consul"
  address = "${module.ssh_consul.local_address}"
  scheme = "http"
  confs = "${module.confs.conf_map}"
  apps = "${zipmap(var.apps, data.local_file.apps.*.content)}"
  nginx_conf_index = "${var.nginx_conf_index}"
}

tomwganem (Author) commented

I tried some other providers (marathon and vault), and they work fine with the ssh provider, with no issues on subsequent applies and plans, so this really seems to be a particular issue with the consul provider.
