Make the application container resources in Kubernetes user-decidable #54
@manuelstein, @paarijaat, @ruichuan, any thoughts?
In my opinion, this values.yaml is the place where one could make changes to better adapt to one's infrastructure. Are you suggesting we should have a settings file like what we do for the ansible installation? I feel that might end up as a file similar to the current values.yaml ;)
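For illustration, a hedged sketch of what such a resource section in deploy/helm/microfunctions/values.yaml could look like (the key names here are hypothetical and do not reflect the actual chart schema):

```yaml
# Hypothetical sketch only -- key names are illustrative, not the real KNIX chart schema.
sandbox:
  resources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
```

A user could then override these at install time with `helm install --set`, but they would still be fixed for the lifetime of the deployment.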
No, I mean for the end users to configure how their workflows could be deployed (i.e., with what kind of container resources). This would be similar to the way AWS Lambda users would pick their configuration with different RAM.
The limits of a container would depend on the functions that are deployed in it and the targeted level of concurrency that the container should be able to handle. Based on this, scaling thresholds can be configured. If the supported concurrency level is low, scale-out might be triggered early, and response times might suffer during the scale-out. If the provided resources allow high concurrency, we accept over-provisioning for better response times. Should we allow the user to provide an initial resource estimate per function instance?
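As a rough illustration of that trade-off (all numbers below are made-up assumptions, not measured KNIX values): given a per-request memory estimate and a container memory limit, the supported concurrency and a scale-out threshold fall out directly.

```python
def max_concurrency(limit_mb: int, base_mb: int, per_request_mb: int) -> int:
    """How many concurrent requests fit under the container's memory limit."""
    return max(0, (limit_mb - base_mb) // per_request_mb)

def scale_out_threshold(limit_mb: int, base_mb: int, per_request_mb: int,
                        headroom: float = 0.8) -> int:
    """Trigger scale-out before the limit is hit (hypothetical 80% headroom)."""
    return int(max_concurrency(limit_mb, base_mb, per_request_mb) * headroom)

# A small limit forces early scale-out; a generous one tolerates more concurrency.
print(scale_out_threshold(1024, 128, 64))   # small container
print(scale_out_threshold(4096, 128, 64))   # generous container
```

With the small container, scale-out fires at low concurrency (early, with possible response-time dips); with the generous one, far more requests are absorbed in place at the cost of over-provisioning.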
Hi Manuel, I think any service provider that is interested in billing for KNIX FaaS-like services in the future will need this type of information, the same as for other services like the object store.
I share your view on allowing a user to provide an estimate of memory usage, but what happens if the estimated memory consumption is wrong or exceeded? In AWS Lambda, for example, such a function could either simply fail or not terminate within the container lifetime constraints. Do we want to give an extra memory allowance, just in case?
Best regards, Klaus
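On the "extra allowance" question: one common Kubernetes pattern is to set the memory request to the user's estimate and the limit somewhat higher, so a function that briefly exceeds its estimate is not OOM-killed immediately. A hedged sketch (the values are purely illustrative):

```yaml
# Illustrative values only: request = user's estimate, limit = estimate + allowance.
resources:
  requests:
    memory: 512Mi   # what the user estimated
    cpu: 500m
  limits:
    memory: 768Mi   # ~50% allowance before the container is OOM-killed
    cpu: "1"
```

The allowance only delays the failure mode, though; if a function genuinely exceeds its limit, Kubernetes will still terminate the container, so the estimate itself still matters for billing and placement.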
Could we report/collect the CPU time and memory use after a function invocation has completed?
Oh, my bad -- missed the "user configuration" part... Then it's generally a good thing to have. The question is whether a user really knows how many resources their function/workflow needs, or whether knix should profile the resource usage of a function/workflow for the user.
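If knix were to profile on the user's behalf, a minimal sketch of per-invocation measurement using only the Python standard library (the wrapper name and stats keys are made up for illustration):

```python
import time
import tracemalloc

def profile_invocation(func, *args, **kwargs):
    """Measure wall-clock time, CPU time, and peak memory of one invocation."""
    tracemalloc.start()
    cpu_start = time.process_time()
    wall_start = time.perf_counter()
    result = func(*args, **kwargs)
    stats = {
        "wall_s": time.perf_counter() - wall_start,
        "cpu_s": time.process_time() - cpu_start,
        # get_traced_memory() returns (current, peak) in bytes
        "peak_mem_bytes": tracemalloc.get_traced_memory()[1],
    }
    tracemalloc.stop()
    return result, stats

# Example: profile a hypothetical allocation-heavy workload.
result, stats = profile_invocation(lambda n: [0] * n, 100_000)
```

Collected over many invocations, such stats could seed the initial resource estimate for later deployments of the same function.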
Yes, that is true. I forgot to consider that the users shouldn't have to reason about the resource usage of their functions. I think it would be good if we could measure each function instance's CPU and memory usage. It would help to adjust the scaling policy for the Kubernetes setup as well as to determine the scaling policy for the bare-metal case (which is something we should also work on).

We can start gathering statistics about already deployed functions. When we need to scale out a deployed workflow, we can adjust the resources accordingly for the new scaled instance (e.g., this sandbox had 1 CPU and 1 GB RAM, and I am now scaling it up for the 3rd time in the last 5 minutes, so maybe I'll just give it more CPU and RAM this time). Similarly, if this function is used in another workflow that is being deployed, we can also consider that information. During deployment, we could then pick the resources of the sandbox according to some heuristics with regard to the functions and states utilized in the workflow. For example, if the workflow contains a 'Parallel' or 'Map' state, we pick a sandbox flavour with more CPU and RAM.

That said, how about we have a few different flavours of sandboxes with different RAM and CPU configurations (e.g., [1 CPU, 1 GB], [1 CPU, 2 GB], [2 CPUs, 4 GB]) to choose from? Or we could just have dynamic values in the deployWorkflow procedure. My original point was that these values are currently fixed and determined at installation time, not adjustable at runtime afterwards, so we should try to make them more dynamic.
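The flavour idea above could be sketched roughly like this (the flavour list and the "3 scale-outs in a short window" escalation rule are taken from the comment; the function names are hypothetical, not knix's actual API):

```python
# Hypothetical sketch of flavour-based sandbox sizing; not KNIX's actual API.
FLAVOURS = [
    {"cpu": 1, "ram_gb": 1},
    {"cpu": 1, "ram_gb": 2},
    {"cpu": 2, "ram_gb": 4},
]

def initial_flavour(workflow_states):
    """Heuristic: 'Parallel' or 'Map' states suggest a bigger sandbox."""
    if any(s in ("Parallel", "Map") for s in workflow_states):
        return FLAVOURS[-1]
    return FLAVOURS[0]

def escalate(current_index, recent_scale_outs, threshold=3):
    """Move to the next flavour after repeated scale-outs in a short window."""
    if recent_scale_outs >= threshold and current_index < len(FLAVOURS) - 1:
        return current_index + 1
    return current_index
```

The escalation decision would run inside whatever schedules new sandbox instances, using the per-workflow scale-out counters gathered from the statistics mentioned above.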
See commit f347d3eb986f4a340340fe149e017a220060731e for the management functionality that allows this configuration for kubernetes environments. Once we have the corresponding changes in the following, we can close this issue.
Currently, the resources assigned to an application container in Kubernetes are static and defined in deploy/helm/microfunctions/values.yaml. I think these values should be configurable, so that during workflow deployment the user can pick how many resources should be assigned to her application's containers.