Accessing Vault to run `vault operator init` requires SSHing into the cluster instances. Following the autosealing example, I've built a use case where I'd like to remove the key-pair access-management overhead by using AWS Session Manager.

Since the AWS SSM Agent comes pre-installed on the base Linux images used to build the AMI, no extra setup steps are needed, but the agent requires the managed service policy `AmazonEC2RoleforSSM` on the instance role.

In the current setup, if I attach this policy using the module's IAM role output, I run into the problem that the ASG launches instances before the attachment exists. The workaround is to create the cluster with a desired capacity of 0 and, once the role has the policy attached, scale it up to the desired amount. It would be nice to have a flag to enable this policy, or to attach it by default, since it is a useful feature. Please let me know if there is a downside to this approach.
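For reference, this is roughly the attachment I'm doing today against the module's role output (the module and output names below are placeholders; adjust them to whatever the module actually exposes):

```hcl
# Attach the AWS-managed SSM policy to the cluster's instance role.
# "module.vault_cluster.iam_role_name" is illustrative; substitute the
# module's real IAM role output.
resource "aws_iam_role_policy_attachment" "vault_ssm" {
  role       = module.vault_cluster.iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM"
}
```

Agent logs from the launched instances: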
2020-07-14 00:02:07 ERROR Error adding the directory to watcher: no such file or directory
2020-07-14 00:02:07 INFO Entering SSM Agent hibernate - EC2RoleRequestError: no EC2 instance role found
2020-07-14 00:05:48 INFO Got signal:terminated value:0x56374c4cc240
2020-07-14 00:05:48 INFO Stopping agent
2020-07-15 20:38:14 ERROR Error adding the directory to watcher: no such file or directory
2020-07-15 20:38:15 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::695292474035:assumed-role/vault-example20200715203639840900000003/i-08377ac995205409e is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:695292474035:instance/i-08377ac995205409e status code: 400, request id: a2e08610-817e-4763-b938-af00b673dc5e
Not sure I follow. I understand that IAM policy attachment is asynchronous, so I suppose there could be some timing issue where the ASG launches the EC2 instances, but the IAM policy is not yet active on their IAM role... But that should resolve after a few seconds. Is the issue that you're trying to connect immediately, e.g., from a provisioner in your Terraform code?
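If it is the immediate-connection case, one option (untested sketch, building on the attachment example above; the resource and variable names are placeholders) is to gate whatever reaches the instances behind the policy attachment and poll SSM until the instance actually registers, rather than assuming the agent is ready right after apply:

```hcl
# Placeholder: the instance ID you want to reach via Session Manager.
variable "instance_id" {
  type = string
}

# Wait until the instance shows up as Online in SSM before anything else
# (e.g. a provisioner) tries to open a session. Requires the AWS CLI locally.
resource "null_resource" "wait_for_ssm" {
  depends_on = [aws_iam_role_policy_attachment.vault_ssm]

  provisioner "local-exec" {
    command = <<-EOT
      until aws ssm describe-instance-information \
              --filters "Key=InstanceIds,Values=${var.instance_id}" \
              --query 'InstanceInformationList[0].PingStatus' --output text \
            | grep -q Online; do
        sleep 10
      done
    EOT
  }
}
```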