
Make the application container resources in Kubernetes user-decidable #54

Open
iakkus opened this issue Jun 24, 2020 · 8 comments

Labels: env/kubernetes (something specific to the Kubernetes setup of KNIX), feature_request (new feature request), improvement (improvements to an existing component)

Comments

@iakkus
Member

iakkus commented Jun 24, 2020

Currently, the resources assigned to an application container in Kubernetes are static and defined in deploy/helm/microfunctions/values.yaml. I think these values should be configurable, so that during workflow deployment users can choose how much CPU and memory is assigned to their application's containers.
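For context, the static values are of the standard Kubernetes requests/limits form; a sketch of the kind of block involved (key names here are illustrative, not necessarily the exact keys used in the KNIX chart):

```yaml
# Hypothetical excerpt in the spirit of deploy/helm/microfunctions/values.yaml
# (illustrative key names; the actual chart keys may differ).
# These values are fixed at helm-install time and apply to every sandbox.
sandbox:
  resources:
    requests:
      cpu: "1"
      memory: "1Gi"
    limits:
      cpu: "1"
      memory: "1Gi"
```

Making this user-decidable would mean overriding such values per workflow deployment rather than once per installation.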

iakkus added the feature_request, improvement, and env/kubernetes labels on Jun 24, 2020
@iakkus
Member Author

iakkus commented Jul 15, 2020

@manuelstein , @paarijaat , @ruichuan, any thoughts?

@ruichuan
Collaborator

ruichuan commented Jul 15, 2020 via email

@iakkus
Member Author

iakkus commented Jul 15, 2020

No, I mean letting end users configure how their workflows are deployed (i.e., with what kind of container resources). This would be similar to the way AWS Lambda users pick a configuration with a different amount of RAM.

iakkus changed the title from "Make the application container resources in Kubernetes configurable" to "Make the application container resources in Kubernetes user-decidable" on Jul 15, 2020
@manuelstein
Collaborator

The limits of a container would depend on the functions that are deployed in it and the targeted level of concurrency that the container should be able to handle. Based on this, scaling thresholds can be configured. If the targeted concurrency level is low, scale-out might trigger early, and response times might suffer while it happens. If the provided resources allow high concurrency, we accept over-provisioning in exchange for better response times.

Should we allow the user to provide an initial resource estimate per function instance?
Could we report/collect the CPU time and memory use after a function invocation has completed?
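On the second question, a per-invocation measurement could be sketched with Python's standard `resource` module (a sketch only: the wrapper name is hypothetical, it assumes the executor can wrap each invocation, and it is Unix-only):

```python
import resource

def measure_invocation(func, *args, **kwargs):
    """Run one function invocation and report CPU time and peak RSS
    via getrusage. Note: ru_maxrss is process-wide peak memory, so
    per-invocation memory attribution is only approximate."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    result = func(*args, **kwargs)
    after = resource.getrusage(resource.RUSAGE_SELF)
    stats = {
        "cpu_seconds": (after.ru_utime + after.ru_stime)
                       - (before.ru_utime + before.ru_stime),
        "peak_rss_kb": after.ru_maxrss,  # kilobytes on Linux
    }
    return result, stats
```

Collecting these statistics per function would give the data needed to drive the scaling decisions discussed above.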

@ksatzke
Collaborator

ksatzke commented Jul 15, 2020 via email

@ruichuan
Collaborator

ruichuan commented Jul 15, 2020 via email

@iakkus
Member Author

iakkus commented Jul 16, 2020

Yes, that is true. I forgot to consider that the users shouldn't have to reason about the resource usage of their functions.

I think it would be good if we could measure each function instance's CPU and memory usage. It would help us adjust the scaling policy for the Kubernetes setup, as well as determine a scaling policy for the bare-metal case (which is something we should also work on).

We can start gathering statistics about already deployed functions. When we need to scale out a deployed workflow, we can adjust the resources of the new scaled instance accordingly (e.g., this sandbox had 1 CPU and 1 GB RAM, and I am now scaling it out for the 3rd time in the last 5 minutes, so maybe I'll just give it more CPU and RAM this time). Similarly, if the function is used in another workflow that is being deployed, we can also take that information into account.

Then, during deployment, we could pick the sandbox's resources according to some heuristics based on the functions and states used in the workflow. For example, if the workflow contains a 'Parallel' or 'Map' state, we pick a sandbox flavour with more CPU and RAM.

That said, how about having a few sandbox flavours with different RAM and CPU configurations (e.g., [1 CPU, 1 GB], [1 CPU, 2 GB], [2 CPUs, 4 GB]) to choose from? Alternatively, we could just accept dynamic values in the deployWorkflow procedure.
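A sketch of such a flavour table combined with the two heuristics above (the flavour names, state checks, and scale-out threshold are all illustrative assumptions, not KNIX APIs):

```python
# Illustrative sketch: predefined sandbox flavours plus a heuristic
# chooser. Names and thresholds are hypothetical, not KNIX code.
FLAVOURS = [
    {"name": "small",  "cpu": 1, "memory_gb": 1},
    {"name": "medium", "cpu": 1, "memory_gb": 2},
    {"name": "large",  "cpu": 2, "memory_gb": 4},
]

def pick_flavour(workflow_states, recent_scaleouts):
    """Start one size up for workflows with Parallel/Map states;
    bump the size again if the sandbox was scaled out repeatedly
    (e.g. 3 or more times in the last 5 minutes)."""
    idx = 0
    if any(s in ("Parallel", "Map") for s in workflow_states):
        idx += 1
    if recent_scaleouts >= 3:
        idx += 1
    return FLAVOURS[min(idx, len(FLAVOURS) - 1)]
```

The same table could back the dynamic-values variant: deployWorkflow would simply accept a flavour (or raw CPU/RAM values) chosen by the user or by such a heuristic.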

My original point was that these values are currently fixed at installation time and cannot be adjusted at runtime afterwards, so we should try to make them more dynamic.

@iakkus
Member Author

iakkus commented May 11, 2021

See commit f347d3eb986f4a340340fe149e017a220060731e for the management functionality that allows this configuration in Kubernetes environments.

Once we have the corresponding changes in the following, we can close this issue.

  • SDK
  • CLI
  • GUI


4 participants