Playground is a Kubernetes-based environment for exploring the capabilities of Aperture. Additionally, it is used as a development environment for Aperture. Playground uses Tilt for orchestrating the deployments in Kubernetes. Tilt watches for changes to local files and auto-deploys any resources that change. This is convenient for getting quick feedback during development of Aperture.
Playground deploys resources to the Kubernetes cluster that `kubectl` on your machine points at. For convenience, refer to Prerequisites for deploying a local Kubernetes cluster using Kind.
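Before bringing up the Playground, it can help to confirm which cluster `kubectl` is currently pointing at:

```sh
# show the kubeconfig context that the Playground will deploy into
kubectl config current-context
```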
Assuming that you have already installed the required tools (see Prerequisites), run the following commands to clone the aperture repository, bring up a local Kubernetes cluster, and start the Playground:
```sh
$ git clone https://github.com/fluxninja/aperture.git
# change directory to the playground
$ cd aperture/playground
# start a local Kubernetes cluster
$ ctlptl apply -f ctlptl-kind-config.yaml
# start Tilt and run the services defined in the Tiltfile
$ tilt up
Tilt started on http://localhost:10350/
v0.30.3, built 2022-06-06

(space) to open the browser
(s) to stream logs (--stream=true)
(t) to open legacy terminal mode (--legacy=true)
(ctrl-c) to exit
```
Now, press Space to open the Tilt UI in your default browser.
📍 Verify that nothing else is running on the ports forwarded by Tilt.
The above command starts an Aperture Controller, along with an Aperture Agent on each worker node of the local Kubernetes cluster. Additionally, it starts a Java-based demo application with the Aperture Java SDK configured to integrate with Aperture. An instance of Grafana also runs on the cluster for viewing metrics from experiments.
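You can check that the core components are running by listing their pods; the controller namespace is also used by the policy command shown below, while `aperture-agent` is assumed here as the default agent namespace:

```sh
# list the Aperture Controller pods
kubectl get pods -n aperture-controller
# list the Aperture Agent pods ("aperture-agent" is the assumed namespace)
kubectl get pods -n aperture-agent
```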
The Playground's default scenario demonstrates Basic Service Protection combined with a Rate-Limiting Actuator that dynamically rate-limits traffic from unwanted users, protecting the demo application against sudden surges in traffic load. You can verify that the policy has been loaded using the following command:
```sh
$ kubectl get policy -n aperture-controller rate-limit-escalation
NAME                    STATUS     AGE
rate-limit-escalation   uploaded   41s
```
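To inspect the policy in more detail (labels, status, events), you can describe the custom resource, since `kubectl describe` works on any registered CRD:

```sh
kubectl describe policy rate-limit-escalation -n aperture-controller
```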
The Playground includes a demo application so that you can generate simulated traffic and see the policy in action. The demo application runs in the `demoapp` namespace. You can read more about the demo application here.
```sh
$ kubectl get pods -n demoapp
NAME                                 READY   STATUS    RESTARTS   AGE
service1-demo-app-54f6549446-ct8k9   2/2     Running   0          7m14s
service1-demo-app-54f6549446-r4mmq   2/2     Running   0          7m14s
service2-demo-app-759bbcc899-kxgwj   2/2     Running   0          7m13s
service2-demo-app-759bbcc899-njpxj   2/2     Running   0          7m13s
service3-demo-app-788857c7cc-557zj   2/2     Running   0          7m13s
service3-demo-app-788857c7cc-vlchn   2/2     Running   0          7m13s
```
```mermaid
flowchart LR
  subgraph loadgen [Load Generator]
    direction LR
    k6([k6])
  end
  subgraph demoapp [Demo Application]
    direction LR
    s1[[service1]]
    s2[[service2]]
    s3[[service3]]
    s1 ==> s2 ==> s3
  end
  subgraph agent [Aperture Agent]
    direction TB
    f1[Flux Meter]
    r1[Rate<br/>Limiter]
    c1[Load<br/>Scheduler]
  end
  k6 ==> s1
  s3 --> f1
  s1 --> r1 --> c1
```
The above diagram shows the interaction between the different services and the policy running on the Aperture Agent:

- `service1` calls `service2`, which then calls `service3`. This call graph is programmed in the request payload of the traffic generator.
- `service3` (the last service in the call graph) simulates a concurrency constraint by limiting the number of requests it can process in parallel.
- Each service simulates an artificial workload by taking a few milliseconds to reply to a request.
- The Flux Meter is configured on `service3`. The Flux Meter helps monitor service-level health signals such as latency, which are used in the Basic Service Protection policy.
- The Load Scheduler and Rate Limiter are configured on `service1`. So, when `service3` is overloaded, load scheduling happens on `service1`.
Once all the resources are in the running state, simulated traffic is automatically generated against the demo application. The traffic is designed to overload the demo application to showcase the capabilities of Aperture.
The load generator is configured to generate the following traffic pattern for the `subscriber`, `guest`, and `crawler` traffic types:

- Ramp up to `5` concurrent users in `10s`.
- Hold at `5` concurrent users for `2m`.
- Ramp up to `30` concurrent users in `1m` (overloads `service3`).
- Hold at `30` concurrent users for `2m` (overloads `service3`).
- Ramp down to `5` concurrent users in `10s`.
- Hold at `5` concurrent users for `2m`.
Once the traffic is running, you can visualize the decisions made by Aperture in Grafana. Navigate to localhost:3333 in your browser to reach Grafana. You can open the FluxNinja dashboard under the `aperture-system` folder to see a number of useful panels.
📍 Grafana's dashboard browser address is localhost:3333/dashboards
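If the dashboards do not load, a quick sanity check is to query Grafana's health endpoint on the forwarded port used above:

```sh
# returns a small JSON document when Grafana is up
curl -s http://localhost:3333/api/health
```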
To stop the traffic at any point in time, press the `Stop Wavepool Generator` button in the `DemoApplications` resource.

To re-start the traffic, press the `Start Wavepool Generator` button in the `DemoApplications` resource.

📍 To manually run the traffic, press the `Stop Wavepool Generator` button first to stop the automatic runner.
The Playground environment relies on specific deployment and configuration management tools, which must be installed beforehand.

To install the required tools, you have two options:

- Use `asdf`
- Or, manually install the tools mentioned here.
First, download and install `asdf`. Then, run the following command in the `playground` directory to install all the required tools:

```sh
./scripts/install_tools.sh
```
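After the script completes, you can check which tool versions `asdf` has activated for this directory:

```sh
# show the tool versions asdf has resolved for the current directory
asdf current
```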
📍 Skip this section if you have already installed the required tools using `asdf`.
The required tools are listed below; a quick way to verify the installation follows the list.

- Helm: a package manager for Kubernetes. To install manually, follow the instructions here.
- Tanka and Jsonnet Bundler: Grafana Tanka is a robust configuration utility for Kubernetes clusters, powered by the Jsonnet language. Jsonnet Bundler is used to manage Jsonnet dependencies. To install manually, follow the instructions here.
- Kind: allows you to run local Kubernetes clusters. To install manually, follow the instructions here.
- kubectl: the command-line tool for interacting with Kubernetes clusters. To install manually, follow the instructions here.
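Once the tools are installed, you can confirm that they are on your `PATH` by printing their versions, for example:

```sh
# quick sanity check for the manually installed tools
helm version
kind version
kubectl version --client
```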
For local deployments and development work, it is convenient to automatically rebuild images and services. The Aperture Playground uses Tilt to achieve this. Tilt can be installed using `asdf` or manually by following the instructions here.
📍 You can skip this section if you already have a running cluster that `kubectl` is pointing at.
Create a Kubernetes cluster using Kind with a configuration file by executing the following command from the aperture home directory:

```sh
kind create cluster --config playground/kind-config.yaml
```

This will start a cluster with the name `aperture-playground`.
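To confirm that `kubectl` is now pointing at the new cluster (Kind prefixes the context name with `kind-`), you can run:

```sh
kubectl cluster-info --context kind-aperture-playground
```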
Once done, you can delete the cluster with the following command:
```sh
kind delete cluster --name aperture-playground
```
Alternatively, you can use `ctlptl` to start a cluster with a built-in local registry for Docker images:

```sh
ctlptl apply -f playground/ctlptl-kind-config.yaml
```
Once done, you can delete the cluster and registry with the following command:
```sh
ctlptl delete -f playground/ctlptl-kind-config.yaml
```
Simply run `tilt up` from the `playground` directory - it will automatically start building and deploying.
You can reach the web UI by going to http://localhost:10350 or pressing (Space).
Tilt should automatically detect new changes to the services, rebuild and re-deploy them.
Useful flags:

- `--port` or `TILT_PORT` - the port on which the web UI should listen
- `--stream` - stream both Tilt and pod logs to the terminal (useful for debugging Tilt itself)
- `--legacy` - use a basic, terminal-based frontend
By default, `tilt` will deploy and manage the Agent and Controller. If you want to limit it to managing only some namespaces or resources, simply pass their names as additional arguments.
Examples:

- `tilt up aperture-grafana` brings up the Grafana service and its dependent services, such as `grafana-operator`.
- `tilt up agent demoapp aperture-grafana` - you can mix namespace names and resource names, as well as specify as many of them as you want.
If you want to manage only explicitly passed resources or namespaces, you should pass the `--only` argument:

- `tilt up -- --only aperture-grafana` - only brings up Grafana; resolving a namespace name to its resources still works.
To view the available namespaces and resources, either:

- run `tilt up --stream -- --list-resources`, or
- read the `DEP_TREE` at the top of the `Tiltfile`.
To disable automatic rebuilding in Tilt, add `--manual` to the command.
Simply run `tilt down`. All created resources will be deleted.
Tilt will automatically set up forwarding for the services.
Below is the mapping of the ports being forwarded by Tilt:
| Component  | Container Port | Local Port |
| ---------- | -------------- | ---------- |
| Prometheus | 9090           | 9090       |
| etcd       | 2379           | 2379       |
| Grafana    | 3000           | 3000       |
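With these forwards in place, you can query the components directly from your machine; for example, Prometheus exposes a readiness endpoint:

```sh
# prints a short readiness message when the forwarded Prometheus is up
curl -s http://localhost:9090/-/ready
```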
By default, the Playground starts with a simple demo scenario loaded. The demo application includes three sets of pods and services. A rate-limiting escalation policy is applied to the Java demo application, and a K6 load generator pattern is created. When the entire deployment turns green, the load generator can be started with the "Start Wavepool Generator" button in the Tilt UI. It will run a 2-minute test in a loop until the "Stop Wavepool Generator" button is clicked.
There are other playground scenarios under the `playground/scenarios/` directory, and they can be loaded during Tilt setup by passing a relative path to the scenario, for example: `tilt up -- --scenario scenarios/load-ramping`.
📍 You can skip building the aperture container images to speed up your work on a scenario by passing `-- --dockerhub-image` to the `tilt up` command. In that case, the latest images will be pulled from DockerHub and used instead.
```
rate_limiting_escalation
├── dashboards
│   └── main.jsonnet
├── load-generator
│   └── test.js
├── metadata.json
└── policies
    ├── service1-demo-app-cr.yaml
    └── service1-demo-app.yaml
```
Each test scenario consists of a few directories, for policies, dashboards, and load generator configuration:

- `metadata.json` describes the test scenario: what images to build, what Tilt dependencies to add, and so on. See the existing test scenarios, as well as the `Tiltfile`, for examples of how to prepare this file.
- `policies/service1-demo-app.yaml` is a values.yaml file for the given policy listed in `metadata.json` under the `aperture_policies` key.
- `load-generator/test.js` is the configuration for the K6 load generator.
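For example, the scenario shown in the directory listing above could be loaded explicitly, following the same `--scenario` convention described earlier:

```sh
tilt up -- --scenario scenarios/rate_limiting_escalation
```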
If you are getting the following message in a cluster container:

```
failed to create fsnotify watcher: too many open files
```

check whether the `sysctl` values `fs.inotify.{max_queued_events,max_user_instances,max_user_watches}` are less than:

```
fs.inotify.max_queued_events=16384
fs.inotify.max_user_instances=1024
fs.inotify.max_user_watches=524288
```
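You can print the current values with:

```sh
sysctl fs.inotify.max_queued_events fs.inotify.max_user_instances fs.inotify.max_user_watches
```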
If they are lower, change them using the following (temporary) method:

```sh
sudo sysctl fs.inotify.max_queued_events=16384
sudo sysctl fs.inotify.max_user_instances=1024
sudo sysctl fs.inotify.max_user_watches=524288
```
or add the following lines to `/etc/sysctl.conf`:

```
fs.inotify.max_queued_events=16384
fs.inotify.max_user_instances=1024
fs.inotify.max_user_watches=524288
```
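To apply the values from `/etc/sysctl.conf` without rebooting, you can then run:

```sh
sudo sysctl -p
```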
For a better understanding of Aperture, refer to the following resources.