```
 _           _                     _
| |__   __ _| |_   _  __ _ _ __ __| |
| '_ \ / _` | | | | |/ _` | '__/ _` |
| | | | (_| | | |_| | (_| | | | (_| |
|_| |_|\__,_|_|\__, |\__,_|_|  \__,_|
               |___/
```
This document consists of three key sections, and we recommend reading them in order.
In order to understand how Halyard will tackle the problems we will describe below, it is valuable to have a mental model of how it will be architected.
Since we want to provide a generic interface to make configuration requests against, without committing to a CLI, a web UI, a packaged library, etc., we will start by writing Halyard as a daemon, listening for requests on port 8064[1]. This allows the content of those requests to be generated by a user-friendly interface or library, which will be described in a separate document.
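To make the daemon model concrete, the exchange below is a minimal sketch: the path shape follows the API table later in this document, while the assumption that request and response bodies are plain YAML documents is ours, not a committed part of the design.

```yaml
# Hypothetical exchange with the Halyard daemon:
#
#   GET http://localhost:8064/1.0.0/spinnaker-api-team/provider/kubernetes/accounts
#
# and the response body, listing every account configured for the
# kubernetes provider of the "spinnaker-api-team" deployment:
- name: prod
- name: staging
- name: test
```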
Currently, it is not easy to set up and operate non-trivial installations of Spinnaker. Doing so requires knowledge of the behavior and operation of many independently configured services, and even with that knowledge, there is no existing solution for distributing or validating configuration, updating Spinnaker's stateful services, or interacting with Spinnaker's API outside of the prebuilt UI. The goal of Halyard is to provide a solution for all of this. To understand what Halyard will become, and how it will address these problems, we will separate the concerns of Halyard into two stages.
Since this document focuses only on the Halyard daemon, it won't be immediately clear what value the config files documented here bring to the user; after all, the end goal is to simplify config, not create more of it. However, these files are meant to be read and modified only by the daemon, whereas the user would modify their config with a series of commands like this:
```
$ hal add-account --help
You need to specify
  --provider (kubernetes, aws, or google)
  --name (a human readable name for this account)
as well as some provider specific information you will be prompted for.

$ hal add-account --provider kubernetes --name prod
The kubernetes provider requires
  --context

$ hal add-account --provider kubernetes --name prod --context <TAB><TAB>
gke_us-central-1    gke_us-east-1    # tab completed entries
gke_us-west-1       gke_us-east-2

$ hal add-account --provider kubernetes --name prod --context gke_us-central-1
Account "prod" validated & added successfully.

$ hal add-account --provider kubernetes --name prod --context gke_us-central-3
Account "prod" could not be added:
  - Context "gke_us-central-3" is not a valid entry in ~/.kube/config.
  - Account "prod" for provider "kubernetes" already exists.

$ hal list-accounts --provider kubernetes
- prod
- staging
- test

$ hal update --service clouddriver
Updating clouddriver configuration...
Creating new clouddriver cluster...
Spinning down old clouddriver cluster...
Update successful
```
Previously, the update path has failed for users in the following places:
- A) Understanding all of the configuration options available to the Kubernetes provider (there are at least 10 or so) after finding the correct documentation.
- B) Understanding how to add multiple accounts to Spinnaker (this comes up weekly in the Slack channel).
- C) Realizing that they have to edit `clouddriver-local.yaml` rather than `spinnaker-local.yaml`, because only the former supports multiple accounts per provider.
- D) Validating that the context they have chosen actually works. This wouldn't be discovered until after Clouddriver has been restarted and the account fails to show up, with an error message buried deep inside the logs.
These are common issues that arise with each provider, and they pose a serious user-experience problem when running Spinnaker. Halyard's objective is to programmatically overcome these issues.
The first stage in Halyard's development will involve three parts.

1.1) Versioning Spinnaker
In order to have confidence in the Spinnaker installation being deployed, we need to pin specific versions of Spinnaker microservices, as well as the dependencies they require in a Bill of Materials (BOM). We propose that the schema looks like this:
```yaml
version: 1.4.0
services: # one entry for every service
  clouddriver:
    version: 1.320.0 # corresponds to travis-ci spinnaker release
    dependencies: # list of name/version pairs
      - name: redis
        version: ">2.0" # it is worth exploring version ranges here
  orca: ...
```
While the first iteration of Halyard development (configuring Spinnaker) will not be able to deploy Spinnaker with the specified versions, it is important that we pin sets of Spinnaker configuration to sets of Spinnaker service versions by means of a single version number.
This BOM should never need to exist on any machine deploying or running Spinnaker, as it only needs to be readable by Halyard at deployment/configuration time, meaning it could be hosted at a publicly readable web endpoint. However, the Spinnaker version itself will be present on whatever machine is running Halyard to inform it of what configuration to read and what to deploy.
The key takeaway here is that a Spinnaker version points to a BOM, and that Halyard therefore only needs the Spinnaker version to determine the versions of all of Spinnaker's services and their dependencies.
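As a minimal sketch of this indirection (the endpoint and file names below are illustrative assumptions, not a committed design), the machine running Halyard stores only the version, and the BOM lives behind a publicly readable endpoint:

```yaml
# All that lives on the machine running Halyard: the pinned Spinnaker version.
version: 1.4.0

# Halyard resolves the BOM remotely at deployment/configuration time,
# e.g. from https://spinnaker-artifacts.example.com/boms/1.4.0.yml,
# which returns the full BOM schema shown above.
```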
1.2) Distributing Spinnaker Configuration

The idea is to have a single place that shared, authoritative Spinnaker configuration can be downloaded from. This will ultimately replace the configuration in `spinnaker/config` by storing each `*.yaml` file in a single versioned bucket (S3/GCS). The bucket version will be mapped to a Spinnaker version to make it simple for Halyard to fetch the correct configuration. The actual set of configuration never needs to be stored on the machine running Halyard, only staged there during distribution of the configuration. This configuration (alongside user edits) will be baked into the VMs running Spinnaker services.
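To make the version-to-bucket mapping concrete, a hypothetical index might look like this (the bucket names and layout are assumptions for illustration):

```yaml
# Hypothetical mapping from Spinnaker version to the bucket prefix holding
# its canned configuration files (clouddriver.yaml, orca.yaml, ...).
1.4.0: gs://spinnaker-config/1.4.0/
1.3.1: gs://spinnaker-config/1.3.1/
```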
1.3) Configuring Spinnaker

This will be the most challenging part of Halyard's first phase of development. In order to do this correctly, let's first list some goals:
- a) The user should never have to open a text editor to write or edit Spinnaker configuration.
- b) If the user does want to hand-edit configuration, Halyard should not interfere with that (but it will be an advanced use case, and shall be treated as such).
- c) Halyard should enable a user to configure multiple instances of Spinnaker all from the same machine.
- d) It should be easy to extend Halyard to accept new config options.
To achieve these goals, Halyard will take a two-step approach to generating Spinnaker configuration:
- a) Receive a number of user commands (add an account, add a trigger, etc.) and store the resulting output in the `~/.hal/config` file (a sketch of this step follows below).
- b) Read configuration from `~/.hal/config` and from the specified Spinnaker version, and write out all Spinnaker configuration to `~/.spinnaker` (the default configuration directory).
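To sketch step (a): a single CLI command maps to one structured edit of `~/.hal/config`. The fragment below is illustrative; it mirrors the account entries in the config schema shown later in this document:

```yaml
# After `hal add-account --provider kubernetes --name prod --context gke_1`,
# Halyard would record the new account under the current deployment's
# kubernetes provider in ~/.hal/config:
accounts:
  - name: prod
    context: gke_1
```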
Before exploring the semantics of the individual Halyard commands, let's look at the `~/.hal` directory.
The directory structure will look something like this:
```
.hal/
  config                    # all halyard spinnaker entries
  spinnaker-api-team/       # optional directory with per-spinnaker config
    clouddriver-local.yaml  # optional -<profile>.yaml files with overrides
  spinnaker-ml-team/
    clouddriver-local.yaml
```
The takeaway from the above diagram is that only `~/.hal/config` is required and read by Halyard, and that for each separate installation of Spinnaker you can optionally provide your own `*-<profile>.yaml` files for further configuration. The things configured in these `*-<profile>.yaml` files should not conflict with anything configured in the below `~/.hal/config`.

The contents of `~/.hal/config` will look like this:
```yaml
halyard-version: 1.0.0
current-deployment: spinnaker-api-team # which deployment to operate on
deployment-configuration:
  - name: spinnaker-api-team
    version: 1.4.0 # Spinnaker version
    providers: &clouddriver # anchor referenced in clouddriver.yaml
      kubernetes: # provider-specific details
        enabled: true
        accounts:
          - name: my-kubernetes-account
            context: ...
          - name: my-other-kubernetes-account
      google: &google
        enabled: false
        accounts:
          - name: ...
    webhooks: &igor # anchor referenced in igor.yaml
      jenkins: # CI-specific details
        enabled: true
        accounts:
          - name: cloudbees
            address: ...
  - name: spinnaker-ml-team
    accounts: ...
```
The anchors referenced above (e.g. `&clouddriver`) will reference entries in each canned `*.yaml` file like so:
```yaml
# clouddriver.yaml
<<: *clouddriver # This merges all clouddriver entries in ~/.hal/config
```
Now that we know what will be stored in the `~/.hal` directory, we need to explain how to generate the contents of the `~/.spinnaker` directory.
- a) For the current deployment (selected by `current-deployment` in `~/.hal/config`), Halyard will download all configuration for the given version number, and prepend the `current-deployment`'s `deployment-configuration` to each canned configuration file (e.g. `clouddriver.yaml`).
- b) Halyard will copy the entries in `~/.hal/<deployment>/*-<profile>.yaml` into `~/.spinnaker`.
Notice that so far we have just defined a more general `spinnaker-local.yaml`, which alone is not interesting. The reason this is necessary is that `spinnaker-local.yaml` prevented us from referring to nested dictionaries in the way the anchors in conjunction with `~/.hal/config` do now. With this in place, we can fully configure Spinnaker's core options (providers, git, webhooks, etc.) in a centralized file. The ability to add additional `*-<profile>.yaml` files exists only to cover config options that are very unlikely to be touched by users of Spinnaker.
To drive home the interaction of the `*-<profile>.yaml` files, the `~/.hal/config` file, and the CVM, consider the following example:
We have the following directory structure:
```
.hal/
  config
  spinnaker-api-team/
    clouddriver-local.yaml
```
```yaml
# ~/.hal/config
halyard-version: 1.0.0
current-deployment: spinnaker-api-team
deployment-configuration:
  - name: spinnaker-api-team
    version: 1.4.0
    providers: &clouddriver
      kubernetes:
        enabled: true
        accounts:
          - name: my-kubernetes-account
            context: gke_1
```
```yaml
# ~/.hal/spinnaker-api-team/clouddriver-local.yaml
retrofit:
  loglevel: FULL
redis:
  scheduler: sort
```
And with a canned clouddriver configuration:
```yaml
# clouddriver.yaml
# spinnaker v1.4.0
<<: *clouddriver
# ... lots of details
```
When Halyard generates this config, it creates the following `~/.spinnaker` directory:
```
.spinnaker/
  clouddriver.yaml
  clouddriver-local.yaml
  orca.yaml
  gate.yaml
  rosco.yaml
  ...  # remaining <service>.yaml files
```
With the contents of `clouddriver-local.yaml`:
```yaml
# ~/.spinnaker/clouddriver-local.yaml
retrofit:
  loglevel: FULL
redis:
  scheduler: sort
```
And the contents of `clouddriver.yaml`:
```yaml
# clouddriver.yaml
# spinnaker v1.4.0
providers: &clouddriver
  kubernetes:
    enabled: true
    accounts:
      - name: my-kubernetes-account
        context: gke_1
<<: *clouddriver
# ... lots of details
```
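For illustration, once the YAML merge key is resolved, the configuration Clouddriver loads is equivalent to the following sketch (the elided canned details are unaffected):

```yaml
# clouddriver.yaml after resolving <<: *clouddriver. The merge key copies
# the keys of the anchored mapping (here, kubernetes) into the document
# root, which is where Clouddriver expects its provider configuration.
providers:
  kubernetes:
    enabled: true
    accounts:
      - name: my-kubernetes-account
        context: gke_1
kubernetes:
  enabled: true
  accounts:
    - name: my-kubernetes-account
      context: gke_1
# ... lots of details
```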
Halyard will only ever make changes to `~/.hal/config`, leaving it up to the user to make optional changes to `~/.hal/<spinnaker-deployment>/*`. Below is a list of the HTTP operations a client can issue against the Halyard daemon.
| METHOD | PATH `/<version>/<deployment>/<type>/<name>` | BODY | DESCRIPTION |
|---|---|---|---|
| POST | `/accounts` | account description | create new account |
| POST | `/webhooks` | webhook description | create new webhook |
| PUT | `/enabled` | boolean | enable/disable account |
| PUT | `/enabled` | boolean | enable/disable webhook |
| PUT | `/accounts/<account>` | account description | edit account |
| PUT | `/webhooks/<webhook>` | webhook description | edit webhook |
| DELETE | `/accounts/<account>` | | delete account |
| DELETE | `/webhooks/<webhook>` | | delete webhook |
| GET | `*` | | return everything matching this path |
- `<version>`: Halyard version.
- `<deployment>`: Name of the Spinnaker deployment.
- `<type>`: (`provider` | `webhook` | `git`).
- `<name>`: Name of the `provider`, `webhook`, or `git` entry.
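As a concrete example, the `hal add-account` call from the transcript above might translate into the request below. The path shape follows the table; the body's field names are assumptions that mirror the account entries in `~/.hal/config`:

```yaml
# POST /1.0.0/spinnaker-api-team/provider/kubernetes/accounts
# Hypothetical account description body:
name: prod
context: gke_us-central-1
```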
Once an incoming change has been processed, the resulting config will always be run through a validator to ensure that Spinnaker can still be deployed with what's been provided. If the validator passes, the configuration is written out to `~/.spinnaker`, as described above. If it fails, the operation will be rejected with an error message describing why.
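A rejected operation might then carry an error body like this sketch (the response shape is an assumption; the messages mirror the CLI transcript above):

```yaml
# Hypothetical validation failure returned by the daemon.
status: rejected
errors:
  - Context "gke_us-central-3" is not a valid entry in ~/.kube/config.
  - Account "prod" for provider "kubernetes" already exists.
```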
See this document
- Replace the Gate API. However, it may be worth having the Halyard CLI provide some very rudimentary operations for starting & examining pipeline state.
[1] 9th factorial divided by the 9th triangle number. Also happens to be an unassigned port close to the range used by other Spinnaker services.