diff --git a/CHANGELOG.md b/CHANGELOG.md index 1476236b3..279f59c46 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,30 @@ +# 2020-01-09 + +## Features + + - Helm charts: + - gundeck: set soft limit to active max concurrent push metrics (#165) + - backoffice: add missing backoffice second pod to offline download (#166) + - nginz: sanitize access tokens from logs (#169) + - brig: branding defaults to simplify customization (#168) + - brig: added new config options (#173) + - aws-ingress: added team settings and account pages (#42) + - team-settings: updated to latest app (#175) + - webapp: updated to latest app (#175) + - account-pages: updated to latest app (#175) + +## Other updates +- Standardise docs to use example.com everywhere (#161, #172) +- Cleaned up and moved docs around to wire-docs (#157) + +## Breaking changes / known issues when upgrading + +- None known + +## Bug fixes +- Minor mc usage fix for minio + + # 2019-09-30 #162 ## Features diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index be9d66b97..3cc09b7b2 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -11,9 +11,9 @@ This is an open source project and we are happy to receive contributions! Improv If submitting pull requests, please follow these guidelines: * if you want to make larger changes, it might be best to first open an issue to discuss the change. -* if helm charts are involved, +* if helm charts are involved, * use the `./bin/update.sh ./charts/` script, to ensure changes in a subchart (e.g. brig) are correctly propagated to the parent chart (e.g. wire-server) before linting or installing. - * ensure they pass linting, you can check with `helm lint -f path/to/extra/values-file.yaml charts/mychart`. + * ensure they pass linting, you can check with `helm lint -f path/to/extra/values-file.yaml charts/mychart`. * If you can, try to also install the chart to see if they work the way you intended. 
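The chart guidelines above can be sketched as a short shell session (a sketch only, assuming you run it from the repository root; `mychart`, the extra values-file path, and the release name `my-release` are placeholders, and the `--dry-run` step is an extra suggestion, not from the original text):

```shell
# Re-package subcharts so changes in e.g. brig are propagated to wire-server:
./bin/update.sh ./charts/
# Lint the chart together with any extra values files before opening a PR:
helm lint -f path/to/extra/values-file.yaml charts/mychart
# If you can, also try an install; --dry-run renders templates without changing the cluster:
helm upgrade --install --dry-run my-release charts/mychart
```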
If you find yourself wishing for a feature that doesn't exist, open an issue on our issues list on GitHub which describes the feature you would like to see, why you need it, and how it should work. diff --git a/README.md b/README.md index eeb754bd0..8f58f4628 100644 --- a/README.md +++ b/README.md @@ -12,261 +12,35 @@ No license is granted to the Wire trademark and its associated logos, all of whi ## Introduction -This repository contains code and documentation on how to deploy [wire-server](https://github.com/wireapp/wire-server). To allow a maximum of flexibility with respect to where wire-server can be deployed (e.g. with cloud providers like AWS, on bare-metal servers, etc), we chose [kubernetes](https://kubernetes.io/) as the target platform. +This repository contains the code and configuration to deploy [wire-server](https://github.com/wireapp/wire-server) and [wire-webapp](https://github.com/wireapp/wire-webapp), as well as dependent components, such as cassandra databases. To allow a maximum of flexibility with respect to where wire-server can be deployed (e.g. with cloud providers like AWS, on bare-metal servers, etc), we chose [kubernetes](https://kubernetes.io/) as the target platform. -This means you first need to install a kubernetes cluster, and then deploy wire-server onto that kubernetes cluster. +## Documentation - +All the documentation on how to make use of this repository is hosted on https://docs.wire.com - refer to the Administrator's Guide. 
-* [Status](#status)
-* [Prerequisites](#prerequisites)
-  * [Required server resources](#required-server-resources)
-* [Contents of this repository](#contents-of-this-repository)
-* [Development setup](#development-setup)
-  * [If you're a maintainer of wire-server-deploy](#if-youre-a-maintainer-of-wire-server-deploy)
-* [Installing wire-server](#installing-wire-server)
-  * [Demo installation](#demo-installation)
-    * [Install non-persistent, non-highly-available databases](#install-non-persistent-non-highly-available-databases)
-    * [Install AWS service mocks](#install-aws-service-mocks)
-    * [Install a demo SMTP server](#install-a-demo-smtp-server)
-    * [Install wire-server](#install-wire-server)
-    * [Adding a load balancer, DNS, and SSL termination](#adding-a-load-balancer-dns-and-ssl-termination)
-    * [Beyond the demo](#beyond-the-demo)
-  * [Support with a production on-premise (self-hosted) installation](#support-with-a-production-on-premise-self-hosted-installation)
-* [Monitoring](#monitoring)
-* [Troubleshooting](#troubleshooting)
+## Contents
-
+* `ansible/` contains ansible roles and playbooks to install kubernetes, cassandra, etc. See the [Administrator's Guide](https://docs.wire.com) for more info.
+* `charts/` contains helm charts that can be installed on kubernetes. The charts are mirrored to S3 and can be used with `helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts`. See the [Administrator's Guide](https://docs.wire.com) for more info.
+* `terraform/` contains some examples for provisioning servers. See the [Administrator's Guide](https://docs.wire.com) for more info.
+* `bin/` contains some helper bash scripts. Some are used in the [Administrator's Guide](https://docs.wire.com) when installing wire-server, and some are used for developers/maintainers of this repository.
-## Status
+## For Maintainers of wire-server-deploy
-Code in this repository should be considered **alpha**.
We do not (yet) run our production infrastructure on kubernetes. +### git branches -Supported features: +* `master` branch is the production branch and the one where helm charts are mirrored to S3, and recommended for use. The helm chart mirror can be added as follows: `helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts` +* `develop` is bleeding-edge, your PRs should branch from here. There is a mirror to S3 you can use if you need to use bleeding edge helm charts: `helm repo add wire-develop https://s3-eu-west-1.amazonaws.com/public.wire.com/charts-develop`. Note that versioning here is done with git tags, not actual git commits, in order not to pollute the git history. -- wire-server (API) - - [x] user accounts, authentication, conversations - - [x] assets handling (images, files, ...) - - [x] (disabled by default) 3rd party proxying - - [x] notifications over websocket - - [ ] notifications over [FCM](https://firebase.google.com/docs/cloud-messaging/)/[APNS](https://developer.apple.com/notifications/) push notifications - - [ ] audio/video calling ([TURN](https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_NAT)/[STUN](https://en.wikipedia.org/wiki/STUN) servers using [restund](https://github.com/wireapp/restund)) -- wire-webapp - - [x] fully functioning web client (like `https://app.wire.com`) -- wire-team-settings - - [x] team management (including invitations, requires access to a private repository) -- wire-account-pages - - [x] user account management (password reset, requires access to a private repository) +### developing charts +For local development, instead of `helm install wire/`, use -## Prerequisites - -As a minimum for a demo installation, you need: - -* a **Kubernetes cluster** with enough resources. There are [many different options](https://kubernetes.io/docs/setup/pick-right-solution/). 
A tiny subset of those solutions we tried include: - * if using AWS, you may want to look at: - * [EKS](https://aws.amazon.com/eks/) (if you're okay having all your data in one of the EKS-supported US regions) - * [kops](https://github.com/kubernetes/kops) - * if using regular physical or virtual servers: - * [kubespray](https://github.com/kubernetes-incubator/kubespray) -* a **Domain Name** under your control and the ability to set DNS entries -* the ability to generate **SSL certificates** for that domain name - * you could use e.g. [Let's Encrypt](https://letsencrypt.org/) - -### Required server resources - -* For an ephemeral in-memory demo-setup - * a single server with 8 CPU cores, 32GB of memory, and 20GB of disk space is sufficient. -* For a production setup, you need at least 3 servers. For an optimal setup, more servers are required, it depends on your environment. - -## Contents of this repository - -* `bin/` - some helper bash scripts -* `charts/` - so-called "[helm](https://www.helm.sh/) charts" - templated kubernetes configuration in YAML -* `docs/` - further documentation -* `values/` - example override values to helm charts - -## Development setup - -You need to install - -* [helm](https://docs.helm.sh/using_helm/#installing-helm) (v2.11.x is known to work) -* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (v1.12.x is known to work) - -and you need to configure access to a kubernetes cluster (minimum v1.9+, 1.12+ recommended). - -For any of the listed `helm install` or `helm upgrade` commands in the documentation, you must first enable the wire charts helm repo (a mirror of this github repository hosted publicly on AWS's S3) - -```shell -helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts -``` - -(You can see available charts by running `helm search wire/`. 
To see new versions as time passes, you may need to run `helm repo update`) - -Optionally, if working in a team and you'd like to share `secrets.yaml` files between developers using a private git repository and encrypted files, you may wish to install - -* [sops](https://github.com/mozilla/sops) -* [helm-secrets plugin](https://github.com/futuresimple/helm-secrets) - -### If you're a maintainer of wire-server-deploy - -see [maintainers.md](docs/maintainers.md) - -## Installing wire-server - -### Demo installation - -* AWS account not required -* Requires only a kubernetes cluster - -The demo setup is the easiest way to install a functional wire-server with limitations (such as no persistent storage, no high-availability, missing features). For the purposes of this demo, we assume you **do not have an AWS account**. Try this demo first before trying to configure a more complicated setup involving persistence and higher availability. - -(Alternatively, you can run replace `wire/` with `charts/` in all subsequent commands. This will read charts from your local file system. Make sure your working directory is the root of this repo, and that after changing any of the chart files, you run `./bin/update.sh` on them.) - -*For all the following `helm upgrade` commands, it can be useful to run a second terminal with `kubectl --namespace demo get pods -w` to see what's happening.* - -#### Install non-persistent, non-highly-available databases - -*Please note that this setup is for demonstration purposes; no data is ever written to disk, so a restart will wipe data. Even without restarts expect it to be unstable: you may experience total service unavailability and/or **total data loss after a few hours/days** due to the way kubernetes and cassandra [interact](https://github.com/kubernetes/kubernetes/issues/28969). 
For more information on this see the production installation section.* - -The following will install (or upgrade) 3 single-pod databases and 3 ClusterIP services to reach them: - -- **databases-ephemeral** - - cassandra-ephemeral - - elasticsearch-ephemeral - - redis-ephemeral - -```shell -helm upgrade --install --namespace demo demo-databases-ephemeral wire/databases-ephemeral --wait -``` - -To delete: `helm delete --purge demo-databases-ephemeral` - -#### Install AWS service mocks - -The code in wire-server still depends on some AWS services for some of its functionality. To ensure wire-server services can correctly start up, install the following "fake" (limited-functionality, non-HA) aws services: - -- **fake-aws** - - fake-aws-sqs - - fake-aws-sns - - fake-aws-s3 - - fake-aws-dynamodb - -```shell -helm upgrade --install --namespace demo demo-fake-aws wire/fake-aws --wait -``` - -To delete: `helm delete --purge demo-fake-aws` - -#### Install a demo SMTP server - -You can either install this very basic SMTP server, or configure your own (see SMTP options in [this section](docs/configuration.md#smtp-server)) - -```shell -helm upgrade --install --namespace demo demo-smtp wire/demo-smtp --wait -``` - -#### Install wire-server - -- **wire-server** - - cassandra-migrations - - elasticsearch-index - - galley - - gundeck - - brig - - cannon - - nginz - - proxy (optional, disabled by default) - - spar (optional, disabled by default) - - webapp (optional, enabled by default) - - team-settings (optional, disabled by default - requires access to a private repository) - - account-pages (optional, disabled by default - requires access to a private repository) - -Start by copying the necessary `values` and `secrets` configuration files: - -``` -cp values/wire-server/demo-values.example.yaml values/wire-server/demo-values.yaml -cp values/wire-server/demo-secrets.example.yaml values/wire-server/demo-secrets.yaml -``` - -In `values/wire-server/demo-values.yaml` (referred to as 
`values-file` below) and `values/wire-server/demo-secrets.yaml` (referred to as `secrets-file`), the following has to be adapted: - -* turn server shared key (needed for audio/video calling) - * Generate with e.g. `openssl rand -base64 64 | env LC_CTYPE=C tr -dc a-zA-Z0-9 | head -c 42` or similar - * Add key to secrets-file under `brig.secrets.turn.secret` - * (this will eventually need to be shared with a turn server, not part of this demo yet) -* zauth private/public keys (For authentication; `access tokens` and `user tokens` (cookies) are signed and validated with these) - * Generate from within [wire-server](https://github.com/wireapp/wire-server) with `./dist/zauth -m gen-keypair -i 1` if you have everything compiled; or alternatively with docker using `docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1` - * add both to secrets-file under `brig.zauth` and the public one to secrets-file under `nginz.secrets.zAuth.publicKeys` -* domain names and urls - * in your values-file, replace `example.com` and other domains and subdomains with domains of your choosing. Look for the `# change this` comments. You can try using `sed -i 's/example.com//g' `. - -Try linting your chart, are any configuration values missing? - -```sh -helm lint -f values/wire-server/demo-values.yaml -f values/wire-server/demo-secrets.yaml wire/wire-server -``` - -If you're confident in your configuration, try installing it: - -```sh -helm upgrade --install --namespace demo demo-wire-server wire/wire-server \ - -f values/wire-server/demo-values.yaml \ - -f values/wire-server/demo-secrets.yaml \ - --wait -``` - -If pods fail to come up the `helm upgrade` may fail or hang; you may wish to run `kubectl get pods -n demo -w` to -see which pods are failing to initialize. Describing the pods may provide information as to why they're failing to -initialize. - -If installation fails you may need to delete the release `helm delete --purge demo-wire-server` and try again. 
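The TURN shared-key generation step described above can be sketched as follows; the command and the 42-character length are from the text, while the variable name `turn_secret` is only illustrative:

```shell
# Generate a 42-character alphanumeric shared secret for the TURN server.
# LC_CTYPE=C keeps `tr` byte-oriented so it can safely strip non-alphanumeric bytes.
turn_secret=$(openssl rand -base64 64 | env LC_CTYPE=C tr -dc a-zA-Z0-9 | head -c 42)
# Paste the value printed here under brig.secrets.turn.secret in the secrets-file:
echo "$turn_secret"
```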
- -#### Adding a load balancer, DNS, and SSL termination - -* If you're on bare metal or on a cloud provider without external load balancer support, see [configuring a load balancer on bare metal servers](docs/configuration.md#load-balancer-on-bare-metal-servers) -* If you're on AWS or another cloud provider, see [configuring a load balancer on cloud provider](docs/configuration.md#load-balancer-on-cloud-provider) - -#### Beyond the demo - -For further configuration options (some have specific requirements about your environment), see [docs/configuration.md](docs/configuration.md). - -### Support with a production on-premise (self-hosted) installation - -[Get in touch](https://wire.com/pricing/). - -## Monitoring - -See the [monitoring guide](./docs/monitoring.md) - -## Troubleshooting - -There are multiple artifacts which combine to form a running wire-server deployment; these include: -- docker images for each service -- kubernetes configs for each deployment (from helm charts) -- configuration maps for each deployment (from helm charts) - - -If you wish to get some information regarding the code currently running on your cluster you can run the following: - +```bash +./bin/update.sh ./charts/ # this will clean and re-package subcharts +helm install charts/ # specify a local file path ``` -./bin/deployment-info.sh -``` - -Example run: - -``` -./deployment-info.sh demo brig -docker_image: quay.io/wire/brig:2.50.319 -chart_version: wire-server-0.24.9 -wire_server_commit: 8ec8b7ce2e5a184233aa9361efa86351c109c134 -wire_server_link: https://github.com/wireapp/wire-server/releases/tag/image/2.50.319 -wire_server_deploy_commit: 01e0f261ca8163e63860f8b2af6d4ae329a32c14 -wire_server_deploy_link: https://github.com/wireapp/wire-server-deploy/releases/tag/chart/wire-server-0.24.9 -``` - -Note you'll need `kubectl`, `git` and `helm` installed +### ./bin/sync.sh -It will output the running docker image; the corresponding wire-server commit hash (and link) and the wire-server 
helm
-chart version which is running.
+This script is used to mirror the contents of this github repository to S3 to make it easier for us and external people to use helm charts. Usually CI will make use of this automatically on merge to master/develop, but you can also run it manually after bumping versions.
diff --git a/ansible/README.md b/ansible/README.md
index 6cc68a5e2..0dde82f54 100644
--- a/ansible/README.md
+++ b/ansible/README.md
@@ -1,302 +1,5 @@
-# ansible-based configuration
+# Ansible-based deployment
-In a production environment, some parts of the wire-server infrastructure (such as e.g. cassandra databases) are best configured outside kubernetes. Additionally, kubernetes can be rapidly set up with kubespray, via ansible.
-The documentation and code under this folder is meant to help with that.
+In a production environment, some parts of the wire-server infrastructure (such as e.g. cassandra databases) are best configured outside kubernetes. Additionally, kubernetes can be rapidly set up with a project called kubespray, via ansible.
-
-
-* [Status](#status)
-* [Assumptions](#assumptions)
-* [Dependencies](#dependencies)
-* [Provision virtual machines](#provision-virtual-machines)
-* [Configuring virtual machines](#configuring-virtual-machines)
-  * [All VMs](#all-vms)
-  * [WARNING: host re-use](#warning-host-re-use)
-  * [Authentication](#authentication)
-  * [ansible pre-kubernetes](#ansible-pre-kubernetes)
-  * [Installing kubernetes](#installing-kubernetes)
-  * [Cassandra](#cassandra)
-  * [ElasticSearch](#elasticsearch)
-  * [Minio](#minio)
-  * [Restund](#restund)
-  * [Installing helm charts - prerequisites](#installing-helm-charts---prerequisites)
-  * [tinc](#tinc)
-
-
-
-## Status
-
-work-in-progress
-
-- [ ] document networking setup
-- [ ] diagram
-- [ ] other assumptions?
-- [x] install kubernetes with kubespray -- [x] install cassandra -- [x] install elasticsearch -- [x] install minio -- [ ] install redis -- [x] install restund servers -- [ ] polish - -## Assumptions - -This document assumes - -* a bare-metal setup (no cloud provider) -* a production SLA where 30 minutes of downtime is unacceptable -* about 1000 active users -* all machines run ubuntu 16.04 or ubuntu 18.04 - -## Dependencies - -### Poetry -First, we're going to install [Poetry](https://poetry.eustace.io/). We'll be using it to run ansible playbooks later. -These directions assume you're using python 2.7 (if you only have python3 available, you may need to find some workarounds): - -To install poetry: -``` -sudo apt install -y python2.7 python-pip -curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py > get-poetry.py -python2.7 get-poetry.py -source $HOME/.poetry/env -ln -s /usr/bin/python2.7 $HOME/.poetry/bin/python -``` -During the installation, answer 'Y' to allow the Path variable for this user to be modified. - - -### Ansible - -* Install the python dependencies to run ansible. -``` -git clone https://github.com/wireapp/wire-server-deploy.git -cd wire-server-deploy/ansible -## (optional) if you need ca certificates other than the default ones: -# export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt -poetry install -``` - -Note: the 'make download-cli-binaries' part of 'make download' requires either that you have run this all as root, or that the user you are running these scripts can 'sudo' without being prompted for a password. I run 'sudo ls', get prompted for a password, THEN run 'make download'. 
-* Download the ansible roles necessary to install databases and kubernetes: -``` -make download -``` - -## Provisioning machines - -Create the following: - -| Name | Amount | CPU | memory | disk | -| ---- | -- | -- | -- | --- | -| cassandra | 3 | 2 | 4 GB | 80 GB | -| minio | 3 | 1 | 2 GB | 100 GB | -| elasticsearch | 3 | 1 | 2 GB | 10 GB | -| redis | 3 | 1 | 2 GB | 10 GB | -| kubernetes | 3 | 4 | 8 GB | 20 GB | -| turn | 2 | 1 | 2 GB | 10 GB | - -It's up to you how you create these machines - kvm on a bare metal machine, VM on a cloud provider, a real physical machine, etc. Make sure they run ubuntu 16.04/18.04. - -Ensure that the machines have IP addresses that do not change. - -## Preparing to run ansible - -### Adding IPs to hosts.ini -Copy the example hosts file: - -`cp hosts.example.ini hosts.ini` - -* Edit the hosts.ini, setting the permanent IPs of the hosts you are setting up wire on. - * replace the `ansible_host` values (`X.X.X.X`) with the IPs that you can reach by SSH. these are the 'internal' addresses of the machines, not what a client will be connecting to. - * replace the `ip` values (`Y.Y.Y.Y`) with the IPs which you wish kubernetes to provide services to clients on. - -There are more settings in this file that we will set in later steps. - -#### WARNING: host re-use - -Some of these playbooks mess with the hostnames of their targets. You MUST pick different hosts for playbooks that rename the host. If you e.g. attempt to run Cassandra and k8s on the same 3 machines, the hostnames will be overwritten by the second installation playbook, breaking the first. - -At the least, we know that the cassandra and kubernetes playbooks are both guilty of hostname manipulation. - -#### Authentication - -##### Password authentication -* if you want to use passwords both for ansible authenticating to a machine, and for ansible to gain root priveledges: -``` -sudo apt install sshpass -``` - * in hosts.ini, uncomment the 'ansible_user = ...' 
line, and change '...' to the user you want to login as. - * in hosts.ini, uncomment the 'ansible_ssh_pass = ...' line, and change '...' to the password for the user you are logging in as. - * in hosts.ini, uncomment the 'ansible_become_pass = ...' line, and change the ... to the password you'd enter to sudo. - - -##### Configuring SSH keys -(from https://linoxide.com/how-tos/ssh-login-with-public-key/) -If you want a bit higher security, you can copy SSH keys between the machine you are administrating with, and the machines you are managing with ansible. - -* Create an SSH key. -``` -ssh-keygen -t rsa -``` - -* Install your SSH key on each of the machines you are managing with ansible, so that you can SSH into them without a password: -``` -ssh-copy-id -i ~/.ssh/id_rsa.pub $USERNAME@$IP -``` -Replace `$USERNAME` with the username of the account you set up when you installed the machine. - -##### Sudo without password -Ansible can be configured to use a password for switching from the unpriviledged $USERNAME to the root user. This involves having the password lying about, so has security problems. -If you want ansible to not be prompted for any administrative command (a different security problem!): - -* As root on each of the nodes, add the following line at the end of the /etc/sudoers file: -``` - ALL=(ALL) NOPASSWD:ALL -``` -Replace `` with the username of the account you set up when you installed the machine. - -#### Ansible pre-kubernetes - -Now that you have a working hosts.ini, and you can access the host, run any ansible scripts you need, in order for the nodes to have internet (proxy config, ssl certificates, etc). - -### Installing kubernetes -Kubernetes is installed via ansible. - -* To deploy kubernetes: -``` -poetry run ansible-playbook -i hosts.ini kubernetes.yml -vv -``` - -### Cassandra - -* Set variables in the hosts.ini file under `[cassandra:vars]`. 
Most defaults should be fine, except maybe for the cluster name and the network interface to use: - -```ini -[cassandra:vars] -## set to True if using AWS -is_aws_environment = False -# cassandra_clustername: default - -[all:vars] -## Set the network interface name for cassandra to bind to if you have more than one network interface -# cassandra_network_interface = eth0 -``` - -(see [defaults/main.yml](https://github.com/wireapp/ansible-cassandra/blob/master/defaults/main.yml) for a full list of variables to change if necessary) - -Install cassandra: - -``` -poetry run ansible-playbook -i hosts.ini cassandra.yml -vv -``` - -### ElasticSearch - -* In your 'hosts.ini' file, in the `[elasticsearch:vars]` section, set 'elasticsearch_network_interface' to the name of the interface you want elasticsearch nodes to talk to each other on. For example: - -```ini -[all:vars] -# default first interface on ubuntu on kvm: -elasticsearch_network_interface=ens3 -``` - -* Use poetry to run ansible, and deploy ElasticSearch: -``` -poetry run ansible-playbook -i hosts.ini elasticsearch.yml -vv -``` - -### Minio - -* In your 'hosts.ini' file, in the `[all:vars]` section, make sure you set the 'minio_network_interface' to the name of the interface you want minio nodes to talk to each other on. The default from the playbook is not going to be correct for your machine. For example: - -```ini -[all:vars] -# Default first interface on ubuntu on kvm: -minio_network_interface=ens3 -``` - -* In your 'hosts.ini' file, in the `[minio:vars]` section, ensure you set minio_access_key and minio_secret key. - -* Use poetry to run ansible, and deploy Minio: -``` -poetry run ansible-playbook -i hosts.ini minio.yml -vv -``` - -### Restund - -Set other variables in the hosts.ini file under `[restund:vars]`. Most defaults should be fine, except for the network interfaces to use: - -* set `ansible_host=X.X.X.X` under the `[all]` section to the IP for SSH access. 
-* (recommended) set `restund_network_interface = ` under the `[restund:vars]` section to the interface name you wish the process to use. Defaults to the default_ipv4_address, with a fallback to `eth0`. -* (optional) `restund_peer_udp_advertise_addr=Y.Y.Y.Y`: set this to the IP to advertise for other restund servers if different than the ip on the 'restund_network_interface'. If using 'restund_peer_udp_advertise_addr', make sure that UDP (!) traffic from any restund server (including itself) can reach that IP (for `restund <-> restund` communication). This should only be necessary if you're installing restund on a VM that is reachable on a public IP address but the process cannot bind to that public IP address directly (e.g. on AWS VPC VM). If unset, `restund <-> restund` UDP traffic will default to the IP in the `restund_network_interface`. - -```ini -[all] -(...) -restund01 ansible_host=X.X.X.X - -(...) - -[all:vars] -## Set the network interface name for restund to bind to if you have more than one network interface -## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0 -restund_network_interface = eth0 -``` - -(see [defaults/main.yml](https://github.com/wireapp/ansible-restund/blob/master/defaults/main.yml) for a full list of variables to change if necessary) - -Install restund: - -``` -poetry run ansible-playbook -i hosts.ini restund.yml -vv -``` - -### Installing helm charts - prerequisites - -The `helm_external.yml` playbook can be used to write the IPs of the databases into the `values/cassandra-external/values.yaml` file, and thus make them available for helm and the `...-external` charts (e.g. `cassandra-external`). - -Ensure to define the following in your hosts.ini under `[all:vars]`: - -```ini -[all:vars] -minio_network_interface = ... -cassandra_network_interface = ... -elasticsearch_network_interface = ... -redis_network_interface = ... 
-``` - -``` -poetry run ansible-playbook -i hosts.ini -vv --diff helm_external.yml -``` - -Now you can install the helm charts. - -### tinc - -Installing [tinc mesh vpn](http://tinc-vpn.org/) is **optional and experimental**. It allows having a private network interface `vpn0` on the target VMs. - -_Note: Ensure to run the tinc.yml playbook first if you use tinc, before other playbooks._ - -* Add a `vpn_ip=Z.Z.Z.Z` item to each entry in the hosts file with a (fresh) IP range if you wish to use tinc. -* Add a group `vpn`: - -```ini -# this is a minimal example -[all] -server1 ansible_host=X.X.X.X vpn_ip=10.10.1.XXX -server1 ansible_host=X.X.X.X vpn_ip=10.10.1.YYY - -[cassandra] -server1 -server2 - -[vpn:children] -cassandra -# add other server groups here as necessary -``` - -Configure the physical network interface inside tinc.yml if it is not `eth0`. Then: - -``` -poetry run ansible-playbook -i hosts.ini tinc.yml -vv -``` +This directory hosts a range of ansible playbooks to install kubernetes and databases necessary for wire-server. For documentation on usage, please refer to the [Administrator's Guide](https://docs.wire.com), notably the production installation. 
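The playbooks in this directory generally read their settings from a `hosts.ini` inventory; as a minimal sketch, the `[all:vars]` interface variables expected by `helm_external.yml` might look like the following (the interface name `ens3` is only an example — the default first interface on ubuntu under kvm — and must be adapted to your machines):

```ini
[all:vars]
# The interface each service binds to for node-to-node traffic:
cassandra_network_interface = ens3
elasticsearch_network_interface = ens3
minio_network_interface = ens3
redis_network_interface = ens3
```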
diff --git a/ansible/minio.yml b/ansible/minio.yml index c92fd1c5b..d64b33908 100644 --- a/ansible/minio.yml +++ b/ansible/minio.yml @@ -60,9 +60,9 @@ tags: mc-config - name: "make the 'public' bucket world-accessible" - shell: "mc policy set public local/public" + shell: "mc policy public local/public" run_once: true - tags: bucket-create + tags: mc-config - name: "remove unneeded config aliases added by default" shell: "mc config host rm {{ item }}" diff --git a/ansible/roles/minio-static-files/defaults/main.yml b/ansible/roles/minio-static-files/defaults/main.yml index d0445e2d9..27e69d60b 100644 --- a/ansible/roles/minio-static-files/defaults/main.yml +++ b/ansible/roles/minio-static-files/defaults/main.yml @@ -6,7 +6,7 @@ #domain: example.com #deeplink_title: Example Environment -assetsURL: "https://{{ prefix }}s3.{{ domain }}" +assetsURL: "https://{{ prefix }}assets.{{ domain }}" deeplink_config_json: "{{ assetsURL }}/public/deeplink.json" backendURL: "https://{{ prefix }}https.{{ domain }}" backendWSURL: "https://{{ prefix }}ssl.{{ domain }}" diff --git a/bin/offline/online-download-images.sh b/bin/offline/online-download-images.sh index 58dc11b47..28641cd65 100755 --- a/bin/offline/online-download-images.sh +++ b/bin/offline/online-download-images.sh @@ -24,6 +24,7 @@ BACKEND_VERSION=2.60.0 WEBAPP_VERSION="42720-0.1.0-64e6cb-v0.22.0-production" TEAM_VERSION="10562-2.8.0-9e1e59-v0.22.1-production" ACCOUNT_VERSION="242-2.0.1-c4282e-v0.20.4-production" +BACKOFFICE_FRONTEND_VERSION=1.0.1 ########################################################### # You should not need to change the code below @@ -71,6 +72,8 @@ for image in "${images[@]}"; do download "$image" "$BACKEND_VERSION" done +download backoffice-frontend "$BACKOFFICE_FRONTEND_VERSION" + download webapp "$WEBAPP_VERSION" download account "$ACCOUNT_VERSION" diff --git a/charts/account-pages/values.yaml b/charts/account-pages/values.yaml index cecc96a10..b8c9e5abf 100644 --- 
a/charts/account-pages/values.yaml
+++ b/charts/account-pages/values.yaml
@@ -9,7 +9,7 @@ resources:
     cpu: "1"
 image:
   repository: quay.io/wire/account
-  tag: 242-2.0.1-c4282e-v0.20.4-production
+  tag: 2124-2.0.2-df000b-v0.24.26-production
 service:
   https:
     externalPort: 443
@@ -20,9 +20,9 @@ service:
 #config:
 #  externalUrls:
-#    backendRest:
-#    backendDomain:
-#    appHost:
+#    backendRest: nginz-https.example.com
+#    backendWebsocket: nginz-ssl.example.com
+#    appHost: account.example.com
 #
 # Some relevant environment options, have a look at
 # https://github.com/wireapp/wire-account/wiki/Self-hosting
@@ -32,4 +32,18 @@ envVars: {}
 # E.g.
 # envVars:
 #   FEATURE_ENABLE_DEBUG: "true"
-#
+# You are likely to need at least the following CSP headers,
+# because you are likely to make cross sub-domain requests,
+# i.e., from account.example.com to nginz-https.example.com
+# CSP_EXTRA_CONNECT_SRC: "https://*.example.com, wss://*.example.com"
+# CSP_EXTRA_IMG_SRC: "https://*.example.com"
+# CSP_EXTRA_SCRIPT_SRC: "https://*.example.com"
+# CSP_EXTRA_DEFAULT_SRC: "https://*.example.com"
+# CSP_EXTRA_FONT_SRC: "https://*.example.com"
+# CSP_EXTRA_FRAME_SRC: "https://*.example.com"
+# CSP_EXTRA_MANIFEST_SRC: "https://*.example.com"
+# CSP_EXTRA_OBJECT_SRC: "https://*.example.com"
+# CSP_EXTRA_MEDIA_SRC: "https://*.example.com"
+# CSP_EXTRA_PREFETCH_SRC: "https://*.example.com"
+# CSP_EXTRA_STYLE_SRC: "https://*.example.com"
+# CSP_EXTRA_WORKER_SRC: "https://*.example.com"
diff --git a/charts/aws-ingress/templates/ELB_account_pages_https.yaml b/charts/aws-ingress/templates/ELB_account_pages_https.yaml
new file mode 100644
index 000000000..02ef36050
--- /dev/null
+++ b/charts/aws-ingress/templates/ELB_account_pages_https.yaml
@@ -0,0 +1,24 @@
+{{- if .Values.ingress.accountPages.enabled }}
+kind: Service
+apiVersion: v1
+metadata:
+  name: account-pages-elb-https
+  annotations:
+    # annotations are documented under https://kubernetes.io/docs/concepts/services-networking/service/
+
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "{{ .Values.ingress.accountPages.https.externalPort }}" + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "{{ .Values.ingress.accountPages.https.sslCert }}" + service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "{{ .Values.ingress.accountPages.https.sslPolicy }}" + external-dns.alpha.kubernetes.io/hostname: "{{ .Values.ingress.accountPages.https.hostname }}" + external-dns.alpha.kubernetes.io/ttl: "{{ .Values.ingress.accountPages.https.ttl }}" +spec: + type: LoadBalancer + selector: + wireService: account-pages + ports: + - name: https + protocol: TCP + port: {{ .Values.ingress.accountPages.https.externalPort }} + # NOTE: This value should match the account pages http listening port + targetPort: {{ .Values.ingress.accountPages.http.accountPagesPort }} +{{- end }} diff --git a/charts/aws-ingress/templates/ELB_team_settings_https.yaml b/charts/aws-ingress/templates/ELB_team_settings_https.yaml new file mode 100644 index 000000000..3476bad0f --- /dev/null +++ b/charts/aws-ingress/templates/ELB_team_settings_https.yaml @@ -0,0 +1,24 @@ +{{- if .Values.ingress.teamSettings.enabled }} +kind: Service +apiVersion: v1 +metadata: + name: team-settings-elb-https + annotations: + # annotations are documented under https://kubernetes.io/docs/concepts/services-networking/service/ + service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "{{ .Values.ingress.teamSettings.https.externalPort }}" + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "{{ .Values.ingress.teamSettings.https.sslCert }}" + service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "{{ .Values.ingress.teamSettings.https.sslPolicy }}" + external-dns.alpha.kubernetes.io/hostname: "{{ .Values.ingress.teamSettings.https.hostname }}" + external-dns.alpha.kubernetes.io/ttl: "{{
.Values.ingress.teamSettings.https.ttl }}" +spec: + type: LoadBalancer + selector: + wireService: team-settings + ports: + - name: https + protocol: TCP + port: {{ .Values.ingress.teamSettings.https.externalPort }} + # NOTE: This value should match team settings http listening port + targetPort: {{ .Values.ingress.teamSettings.http.teamSettingsPort }} +{{- end }} diff --git a/charts/aws-ingress/values.yaml b/charts/aws-ingress/values.yaml index 2c44e3f03..3373fc6cc 100644 --- a/charts/aws-ingress/values.yaml +++ b/charts/aws-ingress/values.yaml @@ -4,13 +4,14 @@ # corresponding certificates uploaded, see # https://aws.amazon.com/premiumsupport/knowledge-center/import-ssl-certificate-to-iam/ # + ingress: webapp: https: externalPort: 443 sslCert: arn:aws:iam::00000-accountnumber-00000:server-certificate/example.com sslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01 - hostname: -webapp-https. + hostname: webapp.example.com ttl: 300 http: webappPort: 8080 @@ -19,7 +20,7 @@ ingress: externalPort: 443 sslCert: arn:aws:iam::00000-accountnumber-00000:server-certificate/example.com sslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01 - hostname: -nginz-https. + hostname: nginz-https.example.com ttl: 300 http: httpPort: 8080 @@ -27,7 +28,7 @@ ingress: externalPort: 443 sslCert: arn:aws:iam::00000-accountnumber-00000:server-certificate/example.com sslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01 - hostname: -webapp-https. + hostname: nginz-ssl.example.com ttl: 300 ws: wsPort: 8081 @@ -37,10 +38,28 @@ ingress: externalPort: 443 sslCert: arn:aws:iam::00000-accountnumber-00000:server-certificate/example.com sslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01 - hostname: -s3-https. 
+ hostname: assets.example.com ttl: 300 http: s3Port: 9000 selector: key: app value: minio # (currently) fake-aws-s3 chart uses 'minio', minio-external chart uses 'minio-external' + teamSettings: + https: + externalPort: 443 + sslCert: arn:aws:iam::00000-accountnumber-00000:server-certificate/example.com + sslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01 + hostname: teams.example.com + ttl: 300 + http: + teamSettingsPort: 8080 + accountPages: + https: + externalPort: 443 + sslCert: arn:aws:iam::00000-accountnumber-00000:server-certificate/example.com + sslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01 + hostname: account.example.com + ttl: 300 + http: + accountPagesPort: 8080 diff --git a/charts/backoffice/README.md b/charts/backoffice/README.md index 5453e74b3..b91a36112 100644 --- a/charts/backoffice/README.md +++ b/charts/backoffice/README.md @@ -11,3 +11,9 @@ Once the chart is installed, and given default values, you can access the fronte * kubectl port-forward svc/backoffice 8080:8080 * Open your local browser at http://localhost:8080 + +If you don't directly access your cluster from your machine, you can do the following (note the backoffice requires port 8080 to be used, but that port is already used by the API server of kubernetes, so use another port like 9999 as intermediate step): + +* in a terminal from a kubernetes-master node: `kubectl port-forward svc/backoffice 9999:8080` +* from another terminal on your machine: `ssh -L 8080:localhost:9999 -N` +* Access your local browser on http://localhost:8080 diff --git a/charts/brig/README.md b/charts/brig/README.md index db6153ee3..506b0cfee 100644 --- a/charts/brig/README.md +++ b/charts/brig/README.md @@ -4,7 +4,3 @@ Note that brig depends on some provisioned storage, namely: - elasticsearch-directory These are dealt with independently from this chart. 
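As a sketch, the two-terminal tunnel described in the backoffice README addition above can be made persistent with an `~/.ssh/config` entry; the hostname, address, and user below are placeholders for your own kubernetes-master node:

```
Host k8s-master
    HostName 203.0.113.5    # placeholder: address of a kubernetes-master node
    User admin              # placeholder: your ssh user on that node
    # local 8080 -> remote 9999, where 'kubectl port-forward svc/backoffice 9999:8080' runs
    LocalForward 8080 localhost:9999
```

With this in place, `ssh -N k8s-master` replaces the manual `ssh -L 8080:localhost:9999 -N` invocation, and the backoffice is reachable at http://localhost:8080 as before.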
- -TODO: - - * TURN setup / calling isn't done diff --git a/charts/brig/templates/configmap.yaml b/charts/brig/templates/configmap.yaml index 3a4f98130..50d1d99d1 100644 --- a/charts/brig/templates/configmap.yaml +++ b/charts/brig/templates/configmap.yaml @@ -176,5 +176,11 @@ data: setMaxTeamSize: {{ .setMaxTeamSize }} setMaxConvSize: {{ .setMaxConvSize }} setEmailVisibility: {{ .setEmailVisibility }} + setPropertyMaxKeyLen: {{ .setPropertyMaxKeyLen }} + setPropertyMaxValueLen: {{ .setPropertyMaxValueLen }} + setDeleteThrottleMillis: {{ .setDeleteThrottleMillis }} + {{- if .setSearchSameTeamOnly }} + setSearchSameTeamOnly: {{ .setSearchSameTeamOnly }} + {{- end }} {{- end }} {{- end }} diff --git a/charts/brig/values.yaml b/charts/brig/values.yaml index 976ec97d2..5ecc2ec96 100644 --- a/charts/brig/values.yaml +++ b/charts/brig/values.yaml @@ -64,6 +64,11 @@ config: setMaxTeamSize: 500 setMaxConvSize: 500 setEmailVisibility: visible_to_self + setPropertyMaxKeyLen: 1024 + setPropertyMaxValueLen: 524288 + setDeleteThrottleMillis: 100 + # Allow search within same team only. 
Default: false + # setSearchSameTeamOnly: false|true smtp: passwordFile: /etc/wire/brig/secrets/smtp-password.txt turnStatic: diff --git a/charts/gundeck/templates/configmap.yaml b/charts/gundeck/templates/configmap.yaml index 05318e54a..804257472 100644 --- a/charts/gundeck/templates/configmap.yaml +++ b/charts/gundeck/templates/configmap.yaml @@ -41,4 +41,7 @@ data: httpPoolSize: 1024 notificationTTL: 2419200 bulkPush: {{ .bulkPush }} + maxConcurrentNativePushes: + soft: 1000 + # hard: 30 # more than this number of threads will not be allowed {{- end }} diff --git a/charts/nginx-ingress-services/values.yaml b/charts/nginx-ingress-services/values.yaml index 6fb0de1e7..ce135a5af 100644 --- a/charts/nginx-ingress-services/values.yaml +++ b/charts/nginx-ingress-services/values.yaml @@ -1,9 +1,9 @@ # Default values for nginx-ingress-services -# Team settings and account pages are disabled by default since they -# require access to a private repo. +# Team settings is disabled by default since it requires access to a private repo. teamSettings: enabled: false +# Account pages may be useful to enable password reset or email validation done after the initial registration accountPages: enabled: false @@ -39,13 +39,13 @@ service: # You will need to supply some DNS names, namely # config: # dns: -# https: bare-https. -# ssl: bare-ssl. -# webapp: bare-webapp. -# fakeS3: bare-s3. -# teamSettings: bare-team. +# https: nginz-https. +# ssl: nginz-ssl. +# webapp: webapp. +# fakeS3: assets. +# teamSettings: teams. # ^ teamSettings is ignored unless teamSettings.enabled == true -# accountPages: bare-account. +# accountPages: account. 
# ^ accountPages is ignored unless accountPages.enabled == true # For TLS # secrets: diff --git a/charts/nginz/README.md b/charts/nginz/README.md index b6488eeeb..20a399f40 100644 --- a/charts/nginz/README.md +++ b/charts/nginz/README.md @@ -21,4 +21,4 @@ This only needs to be done when you wish to bypass normal authentication for som ## Sidecar container nginz-disco -Due to nginx not supporting DNS names for its list of upstream servers (unless you pay extra), the [nginz-disco](https://github.com/wireapp/wire-server/tools/nginz-disco) container is a simple bash script to do DNS lookups and write the resulting IPs to a file. Nginz reloads on changes to this file. +Due to nginx not supporting DNS names for its list of upstream servers (unless you pay extra), the [nginz-disco](https://github.com/wireapp/wire-server/tree/develop/tools/nginz_disco) container is a simple bash script to do DNS lookups and write the resulting IPs to a file. Nginz reloads on changes to this file. diff --git a/charts/nginz/templates/conf/_nginx.conf.tpl b/charts/nginz/templates/conf/_nginx.conf.tpl index 64a2239c4..0a1675615 100644 --- a/charts/nginz/templates/conf/_nginx.conf.tpl +++ b/charts/nginz/templates/conf/_nginx.conf.tpl @@ -51,8 +51,12 @@ http { # # Logging # + # Note sanitized_request: + # We allow passing access_token as query parameter for e.g. websockets + # However we do not want to log access tokens. 
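The access-token scrubbing described in this note can be reproduced outside nginx. Below is a minimal shell sketch of the same substitution using POSIX `sed` instead of nginx's `if`/regex capture, run over a made-up sample request line:

```sh
# Made-up sample request line; in nginx this scrubbing happens per request
# before the line reaches the access log.
request='GET /await?access_token=secret123&client=abc HTTP/1.1'
# Same effect as the nginx regex: keep the parameter name, mask its value.
sanitized=$(printf '%s\n' "$request" | sed 's/access_token=[^&]*/access_token=****/')
echo "$sanitized"   # -> GET /await?access_token=****&client=abc HTTP/1.1
```

The key point, as in the nginx config, is that only the token value is masked; the rest of the request line stays intact for debugging.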
+ # - log_format custom_zeta '$remote_addr $remote_user "$time_local" "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $http_x_forwarded_for $connection $request_time $upstream_response_time $upstream_cache_status $zauth_user $zauth_connection $request_id $proxy_protocol_addr'; + log_format custom_zeta '$remote_addr $remote_user "$time_local" "$sanitized_request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $http_x_forwarded_for $connection $request_time $upstream_response_time $upstream_cache_status $zauth_user $zauth_connection $request_id $proxy_protocol_addr'; access_log /dev/stdout custom_zeta; # @@ -222,6 +226,13 @@ http { {{- end }} location {{ $location.path }} { + + # remove access_token from logs, see 'Note sanitized_request' above. + set $sanitized_request $request; + if ($sanitized_request ~ (.*)access_token=[^&]*(.*)) { + set $sanitized_request $1access_token=****$2; + } + {{- if ($location.basic_auth) }} auth_basic "Restricted"; auth_basic_user_file {{ $.Values.nginx_conf.basic_auth_file }}; diff --git a/charts/team-settings/values.yaml b/charts/team-settings/values.yaml index ad443905c..d9b34f847 100644 --- a/charts/team-settings/values.yaml +++ b/charts/team-settings/values.yaml @@ -9,7 +9,7 @@ resources: cpu: "1" image: repository: quay.io/wire/team-settings - tag: 11298-2.9.1-d0536e-v0.23.5-production + tag: 13000-2.10.0-3b12f2-v0.24.33-production service: https: externalPort: 443 @@ -20,10 +20,10 @@ service: #config: # externalUrls: -# backendRest: -# backendWebsocket: -# backendDomain: -# appHost: +# backendRest: nginz-https.example.com +# backendWebsocket: nginz-ssl.example.com +# backendDomain: example.com +# appHost: teams.example.com #secrets: # configJson: @@ -38,4 +38,18 @@ envVars: {} # E.g. 
# envVars: # FEATURE_ENABLE_DEBUG: "true" -# +# You are likely to need at least following CSP headers +# due to the fact that you are likely to do cross sub-domain requests +# i.e., from teams.example.com to nginz-https.example.com +# CSP_EXTRA_CONNECT_SRC: "https://*.example.com, wss://*.example.com" +# CSP_EXTRA_IMG_SRC: "https://*.example.com" +# CSP_EXTRA_SCRIPT_SRC: "https://*.example.com" +# CSP_EXTRA_DEFAULT_SRC: "https://*.example.com" +# CSP_EXTRA_FONT_SRC: "https://*.example.com" +# CSP_EXTRA_FRAME_SRC: "https://*.example.com" +# CSP_EXTRA_MANIFEST_SRC: "https://*.example.com" +# CSP_EXTRA_OBJECT_SRC: "https://*.example.com" +# CSP_EXTRA_MEDIA_SRC: "https://*.example.com" +# CSP_EXTRA_PREFETCH_SRC: "https://*.example.com" +# CSP_EXTRA_STYLE_SRC: "https://*.example.com" +# CSP_EXTRA_WORKER_SRC: "https://*.example.com" diff --git a/charts/webapp/values.yaml b/charts/webapp/values.yaml index 815db6408..fb9e57037 100644 --- a/charts/webapp/values.yaml +++ b/charts/webapp/values.yaml @@ -9,7 +9,7 @@ resources: cpu: "1" image: repository: quay.io/wire/webapp - tag: 44421-0.1.0-e438dc-v0.24.0-production + tag: 48056-0.1.0-f5e9e8-v0.24.34-production service: https: externalPort: 443 @@ -20,10 +20,10 @@ service: #config: # externalUrls: -# backendRest: -# backendWebsocket: -# backendDomain: -# appHost: +# backendRest: nginz-https.example.com +# backendWebsocket: nginz-ssl.example.com +# backendDomain: example.com +# appHost: webapp.example.com # Some relevant environment options, have a look at # https://github.com/wireapp/wire-webapp/wiki/Self-hosting @@ -33,4 +33,18 @@ envVars: {} # E.g. 
# envVars: # FEATURE_ENABLE_DEBUG: "true" -# +# You are likely to need at least following CSP headers +# due to the fact that you are likely to do cross sub-domain requests +# i.e., from webapp.example.com to nginz-https.example.com +# CSP_EXTRA_CONNECT_SRC: "https://*.example.com, wss://*.example.com" +# CSP_EXTRA_IMG_SRC: "https://*.example.com" +# CSP_EXTRA_SCRIPT_SRC: "https://*.example.com" +# CSP_EXTRA_DEFAULT_SRC: "https://*.example.com" +# CSP_EXTRA_FONT_SRC: "https://*.example.com" +# CSP_EXTRA_FRAME_SRC: "https://*.example.com" +# CSP_EXTRA_MANIFEST_SRC: "https://*.example.com" +# CSP_EXTRA_OBJECT_SRC: "https://*.example.com" +# CSP_EXTRA_MEDIA_SRC: "https://*.example.com" +# CSP_EXTRA_PREFETCH_SRC: "https://*.example.com" +# CSP_EXTRA_STYLE_SRC: "https://*.example.com" +# CSP_EXTRA_WORKER_SRC: "https://*.example.com" diff --git a/docs/_config.yml b/docs/_config.yml deleted file mode 100644 index c74188174..000000000 --- a/docs/_config.yml +++ /dev/null @@ -1 +0,0 @@ -theme: jekyll-theme-slate \ No newline at end of file diff --git a/docs/administration.md b/docs/administration.md deleted file mode 100644 index 45e5d57c3..000000000 --- a/docs/administration.md +++ /dev/null @@ -1,27 +0,0 @@ -# Administration - -This section shows how to interact with some of the server components directly from within the respective virtual machine. - -For any command below, first ssh into it: - -``` -ssh -``` - -## Restund (TURN) - -### How to see how many people are currently connected to the restund server - -Assuming you installed restund using the ansible playbook from this repo, you can interact with it like this (from a restund VM): - -```sh -echo turnstats | nc -u 127.0.0.1 33000 -q1 | grep allocs_cur | cut -d' ' -f2 -``` - -### How to restart restund - -*Please note that restarting `restund` means any user that is currently connected to it (i.e. 
having a call) will lose its audio/video connection* - -``` -systemctl restart restund -``` diff --git a/docs/ansible.md b/docs/ansible.md deleted file mode 100644 index f9bdc807c..000000000 --- a/docs/ansible.md +++ /dev/null @@ -1,21 +0,0 @@ -# Ansible - -TODO - -## Troubleshooting - -`ansible all -i inventory.ini -m shell -a "echo hello"` - -If your target machine only has python 3 (not python 2.7), avoid bootstrapping python 2.7 by: - -``` -# inventory.ini - -[all] -server1 ansible_host=1.2.3.4 - -[all:vars] -ansible_python_interpreter=/usr/bin/python3 -``` - -(python 3 may not be supported by all ansible modules yet) diff --git a/docs/architecture.md b/docs/architecture.md deleted file mode 100644 index 0d5f9a123..000000000 --- a/docs/architecture.md +++ /dev/null @@ -1,59 +0,0 @@ -# Architecture - -TODO other components - -## Restund (TURN) servers - -### Introduction - -Restund servers allow two users on different private networks (for example Alice who is in an office connected to an office router and Bob who is at home connected to a home router) to have a Wire audio or video call. More precisely: - -> Restund is a modular and flexible [STUN](https://en.wikipedia.org/wiki/STUN) and [TURN](https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_NAT) Server, with IPv4 and IPv6 support. - -### Architecture - -Since the restund servers help establishing a connection between two users, they need to be reachable by both of these users, which usually means they need to have a **public IP address**. - -While one server is enough to get started, two servers provide high-availability in case one server gets into trouble. 
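The `turnstats` one-liner in the administration notes above can be broken down; here the same parsing stage runs over a canned sample reply, whereas in production the input would come from `echo turnstats | nc -u 127.0.0.1 33000 -q1`:

```sh
# Canned sample of a restund 'turnstats' reply (field values are made up).
sample='allocs_cur 42
allocs_tot 1337'
# Same parsing as the documented one-liner: pick the allocs_cur line,
# then take the second space-separated field.
allocs=$(printf '%s\n' "$sample" | grep allocs_cur | cut -d' ' -f2)
echo "current allocations: $allocs"   # -> current allocations: 42
```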
- -You can either have restund servers directly exposed to the public internet: - -![architecture-restund](img/architecture-restund.png) - -Or you can have them reachable by fronting them with a firewall or load balancer machine that may have a different IP than the server where restund is installed: - -![architecture-restund-lb](img/architecture-restund-lb.png) - -### Protocols and open ports - -#### UDP - -Restund servers provide the best audio/video connections if end-user devices can connect to them via UDP. In this case, a firewall (if any) needs to allow and/or forward the complete UDP port range `1024-65535` for incoming UDP traffic. Port `3478` is the default control port, however one UDP port per active connection is required, so a whole port range must be available and reachable from the outside. - -In case e.g. office firewall rules disallow UDP traffic, there is a possibility to use TCP instead, at the expense of call quality. - -#### TCP - -Two (configurable) ports are used by restund for TCP, one for plain TCP and one for TLS. By default restund uses ports `3478` for plain TCP and port `5349` for TLS. You can instead use (if that's easier with firewall rules) for example ports `80` and `443` (requires to run restund as root) or do a redirect from a load balancer (if using one) to redirect `443 -> 5349` and `80 -> 3478`. - -### Amount of users and filedescriptors - -Each allocation (active connection by one participant) requires 1 or 2 file descriptors, so ensure you increase your file descriptor limits in case you have many users. - -Currently one restund server can have a maximum of 64000 allocations. If you have more users than that in an active call, you need to deploy more restund servers. 
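The limits above (roughly 64000 allocations per restund server, up to 2 file descriptors per allocation) translate into simple capacity arithmetic; the expected-participant figure below is a made-up example:

```sh
# Back-of-the-envelope sizing from the stated restund limits.
expected_participants=150000      # assumption: peak concurrent call participants
max_allocs_per_server=64000
# ceiling division: how many restund servers are needed
servers=$(( (expected_participants + max_allocs_per_server - 1) / max_allocs_per_server ))
# minimum fd limit per server if every allocation used 2 descriptors
min_fd_limit=$(( max_allocs_per_server * 2 ))
echo "restund servers needed: $servers (fd limit >= $min_fd_limit each)"
```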
- -### Load balancing and high-availability - -Load balancing is not possible, since STUN/TURN is a stateful protocol, so UDP packets addressed to `restund server 1`, if by means of a load balancer were to end up at `restund server 2`, would get dropped, as the second server doesn't know the source address. - -High-availability is nevertheless ensured by having and advertising more than one restund server. - -### Discovery and establishing a call - -A simplified flow of how restund servers, along with the wire-server are used to establish a call: - -![flow-restund](img/flow-restund.png) - -### DNS - -Usually DNS records are used which point to the public IPs of the restund servers (or of the respective firewall or load balancer machines). These DNS names are then used when configuring wire-server. diff --git a/docs/cassandra.md b/docs/cassandra.md deleted file mode 100644 index 4cce0a179..000000000 --- a/docs/cassandra.md +++ /dev/null @@ -1,21 +0,0 @@ -## Interacting with cassandra - -If you installed cassandra with the ansible playbook from this repo, you can interact with it like this (from a cassandra VM): - -See cluster health - -``` -nodetool status -``` - -Inspect tables - -``` -cqlsh -# from the cqlsh shell -describe keyspaces -use ; -describe tables; -``` - -For more information, see the [cassandra documentation](https://cassandra.apache.org/doc/latest/) diff --git a/docs/configuration.md b/docs/configuration.md deleted file mode 100644 index 8f4fa1f3c..000000000 --- a/docs/configuration.md +++ /dev/null @@ -1,226 +0,0 @@ -# Configuration - -This contains instructions towards a more production-ready setup. Depending on your use-case and requirements, you may only need to configure a subset of the following sections. 
- - - -* [Additional requirements recommended for a production setup](#additional-requirements-recommended-for-a-production-setup) -* [Prelude: Overriding configuration settings](#prelude-overriding-configuration-settings) -* [Configuring](#configuring) - * [SMTP server](#smtp-server) - * [Load balancer on bare metal servers](#load-balancer-on-bare-metal-servers) - * [Load Balancer on cloud-provider](#load-balancer-on-cloud-provider) - * [Real AWS services](#real-aws-services) - * [Persistence and high-availability](#persistence-and-high-availability) - * [Security](#security) - * [Sign up with a phone number (Sending SMS)](#sign-up-with-a-phone-number-sending-sms) - * [3rd-party proxying](#3rd-party-proxying) - * [TURN servers (Audio/Video calls)](#turn-servers-audiovideo-calls) - * [Metrics/logging](#metricslogging) - - - -# Additional requirements recommended for a production setup - -* more server resources to ensure [high-availability](#persistence-and-high-availability) -* an email/SMTP server to send out registration emails -* depending on your required functionality, you may or may not need an [**AWS account**](https://aws.amazon.com/). See details about limitations without an AWS account in the following sections. -* one or more people able to maintain the installation -* official support by Wire ([contact us](https://wire.com/pricing/)) - -# Prelude: Overriding configuration settings - -In case you're unfamiliar with the [helm documentation](https://docs.helm.sh/) - -1. Default values are under a specific chart's `values.yaml` file, e.g. `charts/brig/values.yaml` -2. If a chart uses sub charts, there can be overrides in the parent chart's `values.yaml` file, if namespaced to the sub chart. Example: if chart `parent` includes chart `child`, and `child`'s `values.yaml` has a default value `foo: bar`, and the `parent` chart's `values.yaml` has a value - ``` - child: - foo: baz - ``` - then the value that will be used is `baz`. -3. 
Values passed to helm via `-f ` override the above. Note that if you `helm install parent` but wish to override values for `child`, the same logic as in `2.` applies. If `-f ` is used multiple times, the last file wins in case keys exist multiple times (there is no merge performed). - -# Configuring - -## SMTP server - -**Assumptions**: none - -**Provides**: - -* full control over email sending - -**You need**: - -* SMTP credentials (to allow for email sending; prerequisite for registering users and running the smoketest) - -**How to configure**: - -* *if using a gmail account, ensure to enable ['less secure apps'](https://support.google.com/accounts/answer/6010255?hl=en)* -* Add user, SMTP server, connection type to `values/wire-server`'s values file under `brig.config.smtp` -* Add password in `secrets/wire-server`'s secrets file under `brig.secrets.smtpPassword` - -## Load balancer on bare metal servers - -**Assumptions**: - -* You installed kubernetes on bare metal servers or virtual machines that can bind to a public IP address. 
-* **If you are using AWS or another cloud provider, see [Creating a cloudprovider-based load balancer](#load-balancer-on-cloud-provider) instead** - -**Provides**: - -* Allows using a provided Load balancer for incoming traffic -* SSL termination is done on the ingress controller -* You can access your wire-server backend with given DNS names, over SSL and from anywhere in the internet - -**You need**: - -* A kubernetes node with a _public_ IP address (or internal, if you do not plan to expose the Wire backend over the Internet but we will assume you are using a public IP address) -* DNS records for the different exposed addresses (the ingress depends on the usage of virtual hosts), namely: - * bare-https.your-domain - * bare-ssl.your-domain - * bare-s3.your-domain - * bare-webapp.your-domain - * bare-team.your-domain (optional) -* A wildcard certificate for the different hosts (*.your-domain) - we assume you want to do SSL termination on the ingress controller - -**Caveats**: - -* Note that there can be only a _single_ load balancer, otherwise your cluster might become [unstable](https://metallb.universe.tf/installation/) - -**How to configure**: - -``` -cp values/metallb/demo-values.example.yaml values/metallb/demo-values.yaml -cp values/nginx-lb-ingress/demo-values.example.yaml values/nginx-lb-ingress/demo-values.yaml -cp values/nginx-lb-ingress/demo-secrets.example.yaml values/nginx-lb-ingress/demo-secrets.yaml -``` - -* Adapt `values/metallb/demo-values.yaml` to provide a list of public IP address CIDRs that your kubernetes nodes can bind to. -* Adapt `values/nginx-lb-ingress/demo-values.yaml` with correct URLs -* Put your TLS cert and key into `values/nginx-lb-ingress/demo-secrets.yaml`. 
- -Install `metallb` (for more information see the [docs](https://metallb.universe.tf)): - -```sh -helm upgrade --install --namespace metallb-system metallb wire/metallb \ - -f values/metallb/demo-values.yaml \ - --wait --timeout 1800 -``` - -Install `nginx-lb-ingress`: - -``` -helm upgrade --install --namespace demo demo-nginx-lb-ingress wire/nginx-lb-ingress \ - -f values/nginx-lb-ingress/demo-values.yaml \ - -f values/nginx-lb-ingress/demo-secrets.yaml \ - --wait -``` - -Now, create DNS records for the URLs configured above. - -## Load Balancer on cloud-provider - -### AWS - -[Upload the required certificates](https://aws.amazon.com/premiumsupport/knowledge-center/import-ssl-certificate-to-iam/). Create and configure `values/aws-ingress/demo-values.yaml` from the examples. - -``` -helm upgrade --install --namespace demo demo-aws-ingress wire/aws-ingress \ - -f values/aws-ingress/demo-values.yaml \ - --wait -``` - -To give your load balancers public DNS names, create and edit `values/external-dns/demo-values.yaml`, then run [external-dns](https://github.com/helm/charts/tree/master/stable/external-dns): - -``` -helm repo update -helm upgrade --install --namespace demo demo-external-dns stable/external-dns \ - --version 1.7.3 \ - -f values/external-dns/demo-values.yaml \ - --wait -``` - -Things to note about external-dns: - -- There can only be a single external-dns chart installed (one per kubernetes cluster, not one per namespace). So if you already have one running for another namespace you probably don't need to do anything. -- You have to add the appropriate IAM permissions to your cluster (see the [README](https://github.com/helm/charts/tree/master/stable/external-dns)). -- Alternatively, use the AWS route53 console. - -### Other cloud providers - -This information is not yet available. If you'd like to contribute by adding this information for your cloud provider, feel free to read the [contributing guidelines](../CONTRIBUTING.md) and open a PR. 
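For the external-dns values file mentioned above, a minimal sketch for the `stable/external-dns` chart might look like the following; the keys shown (`provider`, `domainFilters`, `txtOwnerId`) are this chart's commonly documented values, not something defined in this repo, so verify them against the README of the chart version you actually install:

```yaml
# values/external-dns/demo-values.yaml -- illustrative only
provider: aws
domainFilters:
  - example.com        # only manage records under this zone
txtOwnerId: wire-demo  # marks records as owned by this external-dns instance
```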
- -## Real AWS services - -**Assumptions**: - -* You installed kubernetes and wire-server on AWS - -**Provides**: - -* Better availability guarantees and possibly better functionality of AWS services such as SQS and dynamoDB. -* You can use ELBs in front of nginz for higher availability. -* instead of using a smtp server and connect with SMTP, you may use SES. See configuration of brig and the `useSES` toggle. - -**You need**: - -* An AWS account - -**How to configure**: - -* Instead of using fake-aws charts, you need to set up the respective services in your account, create queues, tables etc. Have a look at the fake-aws-* charts; you'll need to replicate a similar setup. - * Once real AWS resources are created, adapt the configuration in the values and secrets files for wire-server to use real endpoints and real AWS keys. Look for comments including `if using real AWS`. -* Creating AWS resources in a way that is easy to create and delete could be done using either [terraform](https://www.terraform.io/) or [pulumi](https://pulumi.io/). If you'd like to contribute by creating such automation, feel free to read the [contributing guidelines](../CONTRIBUTING.md) and open a PR. - -## Persistence and high-availability - -Currently, due to the way kubernetes and cassandra [interact](https://github.com/kubernetes/kubernetes/issues/28969), cassandra cannot reliably be installed on kubernetes. Some people have tried, e.g. [this project](https://github.com/instaclustr/cassandra-operator) though at the time of writing (Nov 2018), this does not yet work as advertised. We recommend therefore to install cassandra, (possibly also elasticsearch and redis) separately, i.e. outside of kubernetes (using 3 nodes each). 
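As a sketch, running cassandra outside kubernetes on 3 nodes could start from an ansible inventory fragment like the one below; the hostnames, addresses, and group name are assumptions for illustration, so check the inventory examples shipped with the playbooks in this repo for the exact group names they expect:

```ini
[cassandra]
cassandra01 ansible_host=10.0.1.11
cassandra02 ansible_host=10.0.1.12
cassandra03 ansible_host=10.0.1.13
```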
- -For further higher-availability: - -* scale your kubernetes cluster to have separate etcd and master nodes (3 nodes each) -* use 3 instead of 1 replica of each wire-server chart - -## Security - -For a production deployment, you should, as a minimum: - -* Ensure traffic between kubernetes nodes, etcd and databases are confined to a private network -* Ensure kubernetes API is unreachable from the public internet (e.g. put behind VPN/bastion host or restrict IP range) to prevent [kubernetes vulnerabilities](https://www.cvedetails.com/vulnerability-list/vendor_id-15867/product_id-34016/Kubernetes-Kubernetes.html) from affecting you -* Ensure your operating systems get security updates automatically -* Restrict ssh access / harden sshd configuration -* Ensure no other pods with public access than the main ingress are deployed on your cluster, since, in the current setup, pods have access to etcd values (and thus any secrets stored there, including secrets from other pods) -* Ensure developers encrypt any secrets.yaml files - -Additionally, you may wish to build, sign, and host your own docker images to have increased confidence in those images. We haved "signed container images" on our roadmap. - -## Sign up with a phone number (Sending SMS) - -**Provides**: - -* Registering accounts with a phone number - -**You need**: - -* a [Nexmo](https://www.nexmo.com/) account -* a [Twilio](https://www.twilio.com/) account - -**How to configure**: - -See the `brig` chart for configuration. - -## 3rd-party proxying - -You need Giphy/Google/Spotify/Soundcloud API keys (if you want to support previews by proxying these services) - -See the `proxy` chart for configuration. - -## TURN servers (Audio/Video calls) - -Not yet supported. 
- -## Metrics/logging - -Not yet supported diff --git a/docs/elasticsearch.md b/docs/elasticsearch.md deleted file mode 100644 index e91b3813d..000000000 --- a/docs/elasticsearch.md +++ /dev/null @@ -1,11 +0,0 @@ -## Interacting with elasticsearch - -If you installed elasticsearch with the ansible playbook from this repo, you can interact with it like this (from an elasticsearch VM): - -See cluster health - -``` -curl 'http://localhost:9200/_cat/nodes?v&h=id,ip,name' -``` - -For more information, see the [elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) diff --git a/docs/img/architecture-restund-lb.drawio b/docs/img/architecture-restund-lb.drawio deleted file mode 100644 index 607552853..000000000 --- a/docs/img/architecture-restund-lb.drawio +++ /dev/null @@ -1 +0,0 @@ -[base64 .drawio payload elided] \ No newline at end of file diff --git a/docs/img/architecture-restund-lb.png b/docs/img/architecture-restund-lb.png deleted file mode 100644 index ad5994323..000000000 Binary files a/docs/img/architecture-restund-lb.png and /dev/null differ diff --git a/docs/img/architecture-restund.drawio b/docs/img/architecture-restund.drawio deleted file mode 100644 index 5361aea8e..000000000 --- a/docs/img/architecture-restund.drawio +++ /dev/null @@ -1 +0,0 @@ -[base64 .drawio payload elided] \ No newline at end of file diff --git a/docs/img/architecture-restund.png b/docs/img/architecture-restund.png deleted file mode 100644 index 38851e501..000000000 Binary files a/docs/img/architecture-restund.png and /dev/null differ diff --git a/docs/img/flow-restund.png b/docs/img/flow-restund.png deleted file mode 100644 index a12ed7cfa..000000000 Binary files
a/docs/img/flow-restund.png and /dev/null differ diff --git a/docs/img/flow-restund.swimlanesio b/docs/img/flow-restund.swimlanesio deleted file mode 100644 index 004f3aefe..000000000 --- a/docs/img/flow-restund.swimlanesio +++ /dev/null @@ -1,26 +0,0 @@ -title: Restund (How audio/video calls are established) - -_: **1. Discovery phase** - -Alice -> wire-server: where can I find a restund server? - -wire-server --> Alice: list of available servers - - -Bob -> wire-server: where can I find a restund server? -wire-server --> Bob: list of available servers - -_: **2. Establishing a call** - -Alice -> restund-server: establish restund connection - -Alice -> wire-server: (encrypted for Bob) message to Bob on where to find Alice's restund-server and how to connect to her -wire-server -> Bob: forward encrypted message to Bob - -Bob -> wire-server: (encrypted for Alice) message to Alice saying thank you, I will pick up your call now - -wire-server -> Alice: forward encrypted message to Alice - -Bob -> restund-server: establish restund connection - -note: At this point Alice and Bob are connected in an audio or video call diff --git a/docs/index.md b/docs/index.md deleted file mode 100644 index 251842027..000000000 --- a/docs/index.md +++ /dev/null @@ -1,11 +0,0 @@ -# Wire backend deployment - -This documentation is work in progress. 
- -* [Configuration](configuration.md) -* [Referencing helm charts from this repo](pending.md) -* [Monitoring](monitoring.md) -* [Minio](minio.md) -* [Cassandra](cassandra.md) -* [Elasticsearch](elasticsearch.md) -* [Restund](restund.md) diff --git a/docs/logging.md b/docs/logging.md deleted file mode 100644 index 52c611c7b..000000000 --- a/docs/logging.md +++ /dev/null @@ -1,110 +0,0 @@ -# Deploying logging for the staging cluster - -## Prerequisites - -See the [development setup](https://github.com/wireapp/wire-server-deploy#development-setup) - -## Deploying ElasticSearch -``` -$ helm install --namespace <namespace> wire/elasticsearch-ephemeral -``` - -Note that since we are not specifying a release name during helm install, it generates a 'verb-noun' pair and uses it. -Elasticsearch's chart does not, sadly, use the helm release name in the pod name. - - -## Deploying Kibana -``` -$ helm install --namespace <namespace> wire/kibana -``` - -Note that since we are not specifying a release name during helm install, it generates a 'verb-noun' pair and uses it. If you look at your pod names, you can see this name prepended to your pods in 'kubectl -n <namespace> get pods'. - -## Deploying fluent-bit -``` -$ helm install --namespace <namespace> wire/fluent-bit -``` - -## Configuring fluent-bit -Per pod template, you can specify which parsers `fluent-bit` should use to interpret the pod's logs in a structured way. -By default, it just parses them as plain text, but you can change this using a pod annotation. E.g.: -``` -apiVersion: v1 -kind: Pod -metadata: - name: brig - labels: - app: brig - annotations: - fluentbit.io/parser: json -spec: - containers: - - name: apache - image: edsiper/apache_logs -``` - -You can also define your own custom parsers in our `fluent-bit` chart's `values.yaml`. For example, we have one defined for `nginz`.
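A custom parser entry in the `fluent-bit` chart's values might look like the following. This is an illustrative sketch only: the key names (`parsers.enabled`, `parsers.regex`, and the `name`/`regex` fields) follow the upstream stable fluent-bit chart's convention, and the regex shown is a made-up example, not the actual `nginz` parser.

```yaml
# Hypothetical sketch of a custom parser definition in the fluent-bit
# chart's values file; field names follow the upstream chart's convention.
parsers:
  enabled: true
  regex:
    - name: nginz
      # Made-up example pattern: "<ip> <level> <message>"
      regex: '^(?<remote>[^ ]+) (?<level>[A-Z]+) (?<message>.*)$'
```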
-For more info, see: https://github.com/fluent/fluent-bit-docs/blob/master/filter/kubernetes.md#kubernetes-annotations - - -Alternatively, if fluent-bit is already deployed in your environment, get the helm release name for the deployment (the 'verb-noun' pair prepended to the pod name), and -``` -$ helm upgrade <release-name> --namespace <namespace> wire/fluent-bit -``` - -Note that since we are not specifying a release name during helm install, it generates a 'verb-noun' pair and uses it. If you look at your pod names, you can see this name prepended to your pods in 'kubectl -n <namespace> get pods'. - -## Post-install kibana setup - -Get the pod name for your kibana instance (not the one set up with fluent-bit), and -``` -$ kubectl -n <namespace> port-forward <kibana-pod-name> 5601:5601 -``` - -Go to 127.0.0.1:5601 in your web browser. - -1. Click on 'Discover'. -2. Use 'kubernetes_cluster-*' as the index pattern. -3. Click on 'Next step'. -4. Click on the 'Time Filter field name' dropdown, and select '@timestamp'. -5. Click on 'Create index pattern'. - -## Deploying ElasticSearch-Curator -``` -$ helm install --namespace <namespace> wire/elasticsearch-curator -``` - -Note that since we are not specifying a release name during helm install, it generates a 'verb-noun' pair and uses it. If you look at your pod names, you can see this name prepended to your pods in 'kubectl -n <namespace> get pods'. - -ElasticSearch-Curator trims the logs stored in elasticsearch, so that your elasticsearch pod does not grow too large, crash, and need to be rebuilt. - -## Usage - -Get the pod name for your kibana instance (not the one set up with fluent-bit), and -``` -$ kubectl -n <namespace> port-forward <kibana-pod-name> 5601:5601 -``` - -Go to 127.0.0.1:5601 in your web browser. - -Click on 'Discover' to view data. - -## Nuking it all - -Find the names of the helm releases for your pods (look at `helm ls` and `kubectl -n <namespace> get pods`), and run `helm del --purge <release-name>` for each of them. - -Note: Elasticsearch does not use the name of the helm chart, and is therefore harder to identify.
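The teardown described above can be scripted. The sketch below is hypothetical: the release names are invented stand-ins for the 'verb-noun' pairs helm generates, and the `helm del --purge` commands are only echoed, not executed.

```shell
# Invented example release names, standing in for helm's generated 'verb-noun' pairs.
releases='eager-otter-kibana
wandering-bison-fluent-bit'

# Build one `helm del --purge` command per release; echoed rather than run.
cmds=$(for r in $releases; do printf 'helm del --purge %s\n' "$r"; done)
echo "$cmds"
```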
- -## Debugging -``` -kubectl -n <namespace> logs <pod-name> -``` - -# How this was developed -First, we deployed elasticsearch with the elasticsearch-ephemeral chart, then kibana. Then we deployed fluent-bit, which set up a kibana of its own that looked broken: it had a kibana .tgz in an incorrect location. It also set up far more VMs than I expected, and consumed the logs for the entire cluster, rather than only for the namespace it is contained in, as I had expected. - -For kibana and fluent-bit, we created a shell of overrides, with a dependency on the actual chart, so that when we run `helm dep update`, helm grabs the chart from upstream instead of bringing the source of the chart into our repository. -There were only three files to modify, which we copied from the fake-aws-s3 chart and adapted: Chart.yaml, requirements.yaml, and values.yaml. - -For elasticsearch, we bumped the version number, because kibana was refusing to start, citing too old a version of elasticsearch: it wants 6.x, while we use 5.x for brig and for our kibana/logserver setup. Later, we forced integration tests against the new elasticsearch in confluence. - diff --git a/docs/maintainers.md b/docs/maintainers.md deleted file mode 100644 index 75a109b3a..000000000 --- a/docs/maintainers.md +++ /dev/null @@ -1,14 +0,0 @@ -# Maintainers of wire-server-deploy - -Apart from the usual development setup, you'll additionally need [yq](https://github.com/mikefarah/yq) on your PATH. - -For local development, instead of `helm install wire/<chart-name>`, use - -``` -./bin/update.sh ./charts/<chart-name> # this will clean and re-package subcharts -helm install charts/<chart-name> # specify a local file path -``` - -## ./bin/sync.sh - -This script is used to mirror the contents of this GitHub repository to S3, to make it easier for us and external people to use the helm charts. You may need to run it manually after bumping versions.
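As a concrete illustration of the local workflow above, using `wire-server` as a hypothetical chart name, the commands are only constructed and echoed here, not executed:

```shell
# Hypothetical chart name; substitute the chart you are working on.
chart='wire-server'

# The local-development commands from the section above, echoed for illustration.
update_cmd="./bin/update.sh ./charts/$chart"
install_cmd="helm install charts/$chart"
echo "$update_cmd"
echo "$install_cmd"
```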
diff --git a/docs/minio.md b/docs/minio.md deleted file mode 100644 index c982d69e3..000000000 --- a/docs/minio.md +++ /dev/null @@ -1,14 +0,0 @@ -## Interacting with minio - -If you installed minio with the ansible playbook from this repo, you can interact with it like this: - - -``` -# from a minio machine - -mc config host add server1 http://localhost:9091 - -mc admin info server1 -``` - -For more information, see the [minio documentation](https://docs.min.io/) diff --git a/docs/monitoring.md b/docs/monitoring.md deleted file mode 100644 index 7c749eab0..000000000 --- a/docs/monitoring.md +++ /dev/null @@ -1,198 +0,0 @@ -# Monitoring - - - -* [Prerequisites](#prerequisites) -* [Installation](#installation) -* [Adding Dashboards](#adding-dashboards) -* [Monitoring in a separate namespace](#monitoring-in-a-separate-namespace) -* [Using Custom Storage Classes](#using-custom-storage-classes) -* [Troubleshooting](#troubleshooting) -* [Monitoring without persistent disk](#monitoring-without-persistent-disk) -* [Using custom storage classes](#using-custom-storage-classes-1) -* [Accessing grafana](#accessing-grafana) -* [Accessing prometheus](#accessing-prometheus) - - - -## Prerequisites - -See the [development setup](https://github.com/wireapp/wire-server-deploy#development-setup) - -## Installation - -The following instructions detail the installation of a monitoring system consisting -of a Prometheus instance and corresponding Alert Manager, in addition to a Grafana -instance for viewing dashboards related to cluster and wire-services health. - -If you wish to add custom overrides, you can create a values file and pass it alongside -all of the following `helm` commands using `-f values/wire-server-metrics/demo-values.yaml`: - -Creating an override file: - -```bash -cp values/wire-server-metrics/demo-values.example.yaml values/wire-server-metrics/demo-values.yaml -``` - -The monitoring system requires disk space if you wish to be resilient to pod
If you are deployed on AWS you may install the `aws-storage` helm -chart which provides configurations of Storage Classes for AWS's elastic block -storage (EBS). If you're not using AWS, instead of using `aws-storage`, you -need to provide your [custom storage class](#using-custom-storage-classes). - -First we install the Storage Classes via the `aws-storage` chart: - -``` -helm upgrade --install demo-aws-storage wire/aws-storage \ - --namespace demo \ - --wait -``` - -Next we can install the monitoring suite itself - -There are a few known issues surrounding the `prometheus-operator` helm chart. - -You will likely have to install the Custom Resource Definitions manually before -installing the `wire-server-metrics` chart: - -``` -kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/d34d70de61fe8e23bb21f6948993c510496a0b31/example/prometheus-operator-crd/alertmanager.crd.yaml -kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/d34d70de61fe8e23bb21f6948993c510496a0b31/example/prometheus-operator-crd/prometheus.crd.yaml -kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/d34d70de61fe8e23bb21f6948993c510496a0b31/example/prometheus-operator-crd/prometheusrule.crd.yaml -kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/d34d70de61fe8e23bb21f6948993c510496a0b31/example/prometheus-operator-crd/servicemonitor.crd.yaml -``` - -Now we can install the metrics chart; from the root of the `wire-server-deploy` -repository run the following: - -``` -./bin/update.sh ./charts/wire-server-metrics -helm upgrade --install demo-wire-server-metrics wire/wire-server-metrics \ - --namespace demo \ - --wait -``` - -See the [Prometheus Operator -README](https://github.com/helm/charts/tree/master/stable/prometheus-operator#work-arounds-for-known-issues) -for more information and troubleshooting help. 
- -## Adding Dashboards - -Grafana dashboard configurations are included as JSON inside the -`charts/wire-server-metrics/dashboards` directory. You may import these via -Grafana's web UI. See [Accessing grafana](#accessing-grafana). - -## Monitoring in a separate namespace - -It is advisable to separate your monitoring services from your application services. -To accomplish this, you may deploy `wire-server-metrics` into a separate namespace from -`wire-server`. Simply provide a different namespace to the `helm upgrade --install` calls. - -This chart will monitor all wire services across _all_ namespaces. - -## Using Custom Storage Classes - -If you're using a provider other than AWS, please reference the [Kubernetes -documentation on storage -classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) for -configuring a storage class for your kubernetes cluster. - -## Troubleshooting - -If you receive the following error: - -``` -Error: validation failed: [unable to recognize "": no matches for kind "Alertmanager" in version -"monitoring.coreos.com/v1", unable to recognize "": no matches for kind "Prometheus" in version -"monitoring.coreos.com/v1", unable to recognize "": no matches for kind "PrometheusRule" in version -``` - -please run the `kubectl apply` commands that install the Custom Resource Definitions, as detailed in -the installation instructions above. - ---- - -When upgrading you may see the following error: - -``` -Error: object is being deleted: customresourcedefinitions.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" already exists -``` - -Helm sometimes has trouble cleaning up or defining Custom Resource Definitions.
-Try manually deleting the resource definitions and then running your helm install again: - -``` -kubectl delete customresourcedefinitions \ - alertmanagers.monitoring.coreos.com \ - prometheuses.monitoring.coreos.com \ - servicemonitors.monitoring.coreos.com \ - prometheusrules.monitoring.coreos.com -``` - -## Monitoring without persistent disk - -If you wish to deploy monitoring without any persistent disk (not recommended), -you may add the following overrides to your `values.yaml` file: - -```yaml -prometheus-operator: - grafana: - persistence: - enabled: false - prometheusSpec: - storageSpec: null - alertmanager: - alertmanagerSpec: - storage: null -``` - -## Using custom storage classes - -If you wish to use a different storage class (for instance, if you don't run on AWS), -you may add the following overrides to your `values.yaml` file: - -```yaml -prometheus-operator: - grafana: - persistence: - storageClassName: "" - prometheusSpec: - storageSpec: - volumeClaimTemplate: - spec: - storageClassName: "" - alertmanager: - alertmanagerSpec: - storage: - volumeClaimTemplate: - spec: - storageClassName: "" -``` - -## Accessing grafana - -Forward a port from your localhost to the grafana service running in your cluster: - -``` -kubectl port-forward service/<release-name>-grafana 3000:80 -n <namespace> -``` - -Now you can access grafana at `http://localhost:3000` - -The username and password are stored in the `grafana` secret of your namespace. - -By default, this is: - -- username: `admin` -- password: `admin` - -## Accessing prometheus - -Forward a port from your localhost to the prometheus service running in your cluster: - -``` -kubectl port-forward service/<release-name>-prometheus 9090:9090 -n <namespace> -``` - -Now you can access prometheus at `http://localhost:9090` - diff --git a/docs/pending.md b/docs/pending.md deleted file mode 100644 index 790a101a4..000000000 --- a/docs/pending.md +++ /dev/null @@ -1,20 +0,0 @@ -### Referencing helm charts from this repo - -After [this
issue](https://github.com/hypnoglow/helm-s3/issues/45) is solved, charts can be referenced publicly. Currently, this does not work. Instead, you'll need to check out this repo and run `./bin/update.sh <chart-name>` for each chart before use. - - diff --git a/values/nginx-ingress-services/demo-values.example.yaml b/values/nginx-ingress-services/demo-values.example.yaml index 45e458cb9..3e03a2e3d 100644 --- a/values/nginx-ingress-services/demo-values.example.yaml +++ b/values/nginx-ingress-services/demo-values.example.yaml @@ -11,6 +11,6 @@ config: https: nginz-https.example.com ssl: nginz-ssl.example.com webapp: webapp.example.com - fakeS3: s3.example.com - teamSettings: team.example.com + fakeS3: assets.example.com + teamSettings: teams.example.com accountPages: account.example.com diff --git a/values/nginx-ingress-services/prod-values.example.yaml b/values/nginx-ingress-services/prod-values.example.yaml index 7acd2d4ff..b3d3860c1 100644 --- a/values/nginx-ingress-services/prod-values.example.yaml +++ b/values/nginx-ingress-services/prod-values.example.yaml @@ -12,8 +12,8 @@ config: https: nginz-https.example.com ssl: nginz-ssl.example.com webapp: webapp.example.com - fakeS3: s3.example.com - teamSettings: team.example.com + fakeS3: assets.example.com + teamSettings: teams.example.com accountPages: account.example.com service: diff --git a/values/wire-server/demo-values.example.yaml b/values/wire-server/demo-values.example.yaml index 70f06d131..234f7d905 100644 --- a/values/wire-server/demo-values.example.yaml +++ b/values/wire-server/demo-values.example.yaml @@ -69,7 +69,7 @@ cargohold: # change if using real AWS s3Bucket: dummy-bucket s3Endpoint: http://fake-aws-s3:9000 - s3DownloadEndpoint: https://bare-s3.example.com + s3DownloadEndpoint: https://assets.example.com galley: replicaCount: 1 @@ -122,10 +122,10 @@ webapp: # tag: # some-tag (only override if you want a newer/different version than what is in the chart) config: externalUrls: - backendRest: bare-https.example.com
- backendWebsocket: bare-ssl.example.com + backendRest: nginz-https.example.com + backendWebsocket: nginz-ssl.example.com backendDomain: example.com - appHost: bare-webapp.example.com + appHost: webapp.example.com team-settings: replicaCount: 1 @@ -133,10 +133,10 @@ team-settings: # tag: # some-tag (only override if you want a newer/different version than what is in the chart) config: externalUrls: - backendRest: bare-https.example.com - backendWebsocket: bare-ssl.example.com + backendRest: nginz-https.example.com + backendWebsocket: nginz-ssl.example.com backendDomain: example.com - appHost: bare-webapp.example.com + appHost: webapp.example.com account-pages: replicaCount: 1 @@ -144,6 +144,6 @@ account-pages: # tag: # some-tag (only override if you want a newer/different version than what is in the chart) config: externalUrls: - backendRest: bare-https.example.com + backendRest: nginz-https.example.com backendDomain: example.com - appHost: bare-webapp.example.com + appHost: webapp.example.com diff --git a/values/wire-server/prod-values.example.yaml b/values/wire-server/prod-values.example.yaml index a2859a614..239dd4f0b 100644 --- a/values/wire-server/prod-values.example.yaml +++ b/values/wire-server/prod-values.example.yaml @@ -43,6 +43,18 @@ brig: general: emailSender: email@example.com # change this smsSender: "insert-sms-sender-for-twilio" # change this if SMS support is desired + templateBranding: # change all of these, they are used in emails + brand: Wire + brandUrl: https://wire.com + brandLabel: wire.com + brandLabelUrl: https://wire.com + brandLogoUrl: https://wire.com/p/img/email/logo-email-black.png + brandService: Wire Service Provider + copyright: © WIRE SWISS GmbH + misuse: misuse@wire.com + legal: https://wire.com/legal/ + forgot: https://wire.com/forgot/ + support: https://support.wire.com/ user: passwordResetUrl: https://account.example.com/reset/?key=${key}&code=${code} activationUrl: https://account.example.com/verify/?key=${key}&code=${code} 
@@ -84,7 +96,7 @@ cargohold: # change if using real AWS s3Bucket: assets s3Endpoint: http://minio-external:9000 - s3DownloadEndpoint: https://s3.example.com + s3DownloadEndpoint: https://assets.example.com galley: replicaCount: 3