Build an AMI #4
Conversation
The AMI now includes Clojure and common Clojure tooling (i.e. Boot and Leiningen). Support for both CodeDeploy and CloudWatch is included (although neither is configured or enabled out of the box), and packages from the AUR can be pulled down and cached in a local repo.
I'd consider using something like Ansible to provision these changes in the future. Doing everything with a set of Bash scripts is already a little painful, and will only get worse as complexity increases.
arch.json
Outdated
@@ -1,35 +1,63 @@
{
    "variables": {},
    "_comment": "Keys prefixed with an underscore are comments.",
As I don't have any comments I could remove this, but maybe it'll help someone down the line.
# Default: put built package and cached source in build directory
#
#-- Destination: specify a fixed directory where all packages will be placed
PKGDEST=/var/cache/pacman/juxt
This file is in here so I can point makepkg at the new Pacman cache dir, where local PKGBUILDs and anything from the AUR will end up.
You can query packages not in the 'core' Arch repos like so:
pacman -Sl juxt
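For that query to work, the repo also has to be declared in pacman.conf, pointing at the same directory as PKGDEST. A minimal sketch of such a declaration (the SigLevel choice is an assumption, not taken from this PR):

[juxt]
SigLevel = Optional TrustAll
Server = file:///var/cache/pacman/juxt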
# %wheel ALL=(ALL) ALL

## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Passwordless sudo so we can install things more easily during deployment. Getting makepkg to work as the rock user when it asks to sudo things is not straightforward.
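For the NOPASSWD rule to apply, the rock user has to be a member of the wheel group. A one-line sketch, assuming standard shadow-utils on the image:

usermod -aG wheel rock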
scripts/install-base.sh
Outdated
# Rock user

# Group for running pacman commands without needing to enter a password.
groupadd build
Build group can go as I'm no longer using it.
install_pkgs \
    jq \
    pacutils \
    repose
repose is on the way out and people are moving to repoctl, I'm told. Something to look out for in the not-too-distant future.
# We skip the PGP check because we won't have AladW's GPG key at this point.
#
# TODO Pull in GPG key and use it to check package.
su --preserve-environment -c 'makepkg --skippgpcheck' rock
Skipping PGP checks is me being lazy. It's the one package, but we could import AladW's key and do this right, perhaps?
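A sketch of doing it right; the key ID is a placeholder, not AladW's actual key:

su --preserve-environment -c 'gpg --recv-keys <KEY_ID> && makepkg' rock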
scripts/install-custom.sh
Outdated
systemd-cloud-watch

# Installing packages from the AUR can now be done like so:
# aursync <pkg0> <pkg1> <pkg2> ...
This comment isn't clear enough. You still have to get Pacman to put the package contents in the right place. This only "installs" insofar as it builds the package and adds it to your Pacman cache and the JUXT package repo.
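A sketch of the full two-step flow, with a hypothetical package name:

aursync some-aur-package    # build it and add it to the local juxt repo
pacman -Sy                  # refresh the package databases
pacman -S some-aur-package  # actually install the files onto the system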
all: $(DIRS)

$(DIRS): force
	cd $@ && makepkg --printsrcinfo > .SRCINFO
Easier than escaping the redirect in the shell script that does the provisioning.
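For comparison, a sketch of what the provisioning script would otherwise need; the quoting around the redirect is the fiddly part, and the directory name here is hypothetical:

su -c 'cd pkgs/example && makepkg --printsrcinfo > .SRCINFO' rock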
terraform/main.tf
Outdated
user_data = "${data.template_file.user_data.rendered}"

# This needs to be created manually.
key_name = "rock"
I've created this key in the JUXT AWS account, and can share it with you, @malcolmsparks.
Sent your way.
terraform plan -out proposed.plan

apply:
	terraform apply proposed.plan
Plan looks reasonable, but I don't want to apply this anywhere near customer data etc.
@malcolmsparks do you have somewhere I can test this out tomorrow? If I can get an example user data script that configures things and starts up the services I think this work will be more or less done.
terraform fmt
terraform plan -out proposed.plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.template_file.user_data: Refreshing state...
data.aws_ami.rock: Refreshing state...
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ aws_instance.rock
id: <computed>
ami: "ami-cec09ab7"
associate_public_ip_address: <computed>
availability_zone: "eu-west-1"
ebs_block_device.#: <computed>
ephemeral_block_device.#: <computed>
get_password_data: "false"
instance_state: <computed>
instance_type: "t2.small"
ipv6_address_count: <computed>
ipv6_addresses.#: <computed>
key_name: "rock"
network_interface.#: <computed>
network_interface_id: <computed>
password_data: <computed>
placement_group: <computed>
primary_network_interface_id: <computed>
private_dns: <computed>
private_ip: <computed>
public_dns: <computed>
public_ip: <computed>
root_block_device.#: "1"
root_block_device.0.delete_on_termination: "true"
root_block_device.0.volume_id: <computed>
root_block_device.0.volume_size: "20"
root_block_device.0.volume_type: <computed>
security_groups.#: <computed>
source_dest_check: "true"
subnet_id: <computed>
tags.%: "2"
tags.Name: "rock"
tags.Source: "juxt/rock"
tenancy: <computed>
user_data: "8aed11c63960f4590c9cdd426593ecba98c4ab69"
volume_tags.%: <computed>
vpc_security_group_ids.#: <computed>
+ aws_security_group.allow_all
id: <computed>
arn: <computed>
description: "Allow all inbound traffic."
egress.#: <computed>
ingress.#: "1"
ingress.482069346.cidr_blocks.#: "1"
ingress.482069346.cidr_blocks.0: "0.0.0.0/0"
ingress.482069346.description: ""
ingress.482069346.from_port: "0"
ingress.482069346.ipv6_cidr_blocks.#: "0"
ingress.482069346.protocol: "-1"
ingress.482069346.security_groups.#: "0"
ingress.482069346.self: "false"
ingress.482069346.to_port: "0"
name: "allow_all"
owner_id: <computed>
revoke_rules_on_delete: "false"
tags.%: "2"
tags.Name: "allow_all"
tags.Source: "juxt/rock"
vpc_id: <computed>
Plan: 2 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: proposed.plan
To perform exactly these actions, run the following command to apply:
terraform apply "proposed.plan"
terraform/rock.sh
Outdated
EOF

systemctl daemon-reload
systemctl start systemd-cloud-watch
Only logging for now. I've got to look at CodeDeploy once I've got journald forwarding nicely.
terraform/main.tf
Outdated
name = "name"

# Maybe we should rename the AMI to `juxt-rock`?
values = ["juxt-arch-*"]
Maybe I should rename the AMIs to juxt-rock-*?
I used Arch because I was thinking about splitting the AMIs in two: one for a more-or-less empty Arch AMI with up-to-date everything and a custom repo for AUR packages, and one for the Clojure stuff we want installed…
I've completed the rename in 30be5f7.
AMIs will now be named using the format "juxt-rock-{git-sha}-{timestamp}".
As an alternative to the somewhat broken Go client, this script is being used in production, and as such we know it works. It isn't in any accessible version control, so I've vendored it inside the package. This prevents potential conflicts and permission issues.

I've added the Bash script used elsewhere in production to push journald logs over to CloudWatch, and packaged it up. It's installed in the AMI I'm building presently (…), and can be started with:

systemctl start journald-cloud-watch-script

Internally that runs the provided Bash script, which can be found in …. I haven't had time to test out the script, so I can't say for sure it will work. You can provision an EC2 instance via the included Terraform configuration like so:

cd terraform
make plan
# Review the output to make sure it's correct.
make apply

Note, Terraform state is only stored locally, although pushing state to S3 is relatively straightforward should you wish to do so. I've destroyed everything via ….

HTH!
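Since pushing state to S3 came up: a minimal sketch of what that backend configuration might look like (the bucket name and key are assumptions, not from this repo):

terraform {
  backend "s3" {
    bucket = "juxt-terraform-state"
    key    = "rock/terraform.tfstate"
    region = "eu-west-1"
  }
}

After adding a block like that, running terraform init offers to migrate the existing local state into the bucket.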
"source_ami": "ami-0b8ec472", | ||
"instance_type": "t2.small", | ||
"ssh_username": "root", | ||
"ami_name": "juxt-rock-{{user `commit_ref`}}-{{timestamp}}" |
To make the AMI public you need to add "ami_groups": ["all"].
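A sketch of where that would sit, alongside the builder settings quoted above:

"source_ami": "ami-0b8ec472",
"instance_type": "t2.small",
"ssh_username": "root",
"ami_name": "juxt-rock-{{user `commit_ref`}}-{{timestamp}}",
"ami_groups": ["all"]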
# Install
install_pkgs \
    codedeploy-agent \
    journald-cloud-watch-script \
This is the Bash script that will forward logs as an alternative to the Go project that doesn't seem to work.
INSTANCE_ID="${INSTANCE_ID:-$(curl -s http://169.254.169.254/latest/meta-data/instance-id)}"
AWS_REGION="${AWS_REGION:-$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)}"
LOG_GROUP_NAME="${LOG_GROUP_NAME:-rock}"
These env vars could be set in an environment file via user data, perhaps?
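A sketch of that idea, assuming the unit reads an environment file (the path is hypothetical):

# In user data:
cat > /etc/journald-cloud-watch.env <<'EOF'
LOG_GROUP_NAME=rock
AWS_REGION=eu-west-1
EOF

# In the systemd unit:
# [Service]
# EnvironmentFile=/etc/journald-cloud-watch.env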
# Looks like there's a bug that results in a lot of log output from this library.
#
# Fix is in this PR: https://github.com/advantageous/systemd-cloud-watch/pull/16
# systemctl start systemd-cloud-watch
This is the user data file that will be executed inside the EC2 instance when you provision with Terraform. Starting services and adding configuration files go here.
Tasks
- Make sure CodeDeploy isn't logging to a file (Write CodeDeploy logs to journald #7)