Hi Team, we have two clusters on EKS: the first uses managed nodes and worked fine prior to its decommissioning; the second uses EKS Fargate. I've noticed that the Fargate nodes take substantially longer to pull an image, both for the deployment of the Workbench pod and for session pods; typically this takes around 10 minutes. Images are not cached either, so a fresh image pull happens on every session launch, and each session takes ~10 minutes to start. See below:
@Cecilsingh Using Fargate on EKS stands up a brand new Kubernetes node for each pod that is scheduled using a Fargate Profile. Since it is a new node, it starts with an empty image cache.
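For context, which pods land on Fargate is determined by the Fargate Profile's selectors; a minimal sketch of an eksctl ClusterConfig (cluster name, region, and namespace are placeholders, not taken from this thread):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: workbench-cluster   # placeholder
  region: us-east-1         # placeholder
fargateProfiles:
  - name: workbench
    selectors:
      # Every pod scheduled into this namespace gets its own
      # fresh Fargate "node" with an empty image cache.
      - namespace: rstudio  # placeholder
```

Any pod matching a selector is scheduled onto a newly provisioned Fargate VM, which is why no image layer ever survives between sessions.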
Amazon EKS Fargate adds defense-in-depth for Kubernetes applications by isolating each Pod within a Virtual Machine (VM). This VM boundary prevents access to host-based resources used by other Pods in the event of a container escape, which is a common method of attacking containerized applications and gaining access to resources outside of the container.
Unfortunately, the startup time is a combination of the time it takes for AWS to create a new compute instance, add it to the Kubernetes cluster, and pull the image.
I noticed that creating each "node" takes about 45 seconds, regardless of which image is used. Using nginx as an example, the allocation takes around 45 seconds:
This also holds for our Workbench images: allocating a new node takes ~45 seconds. Strangely, the image pull for Workbench on Fargate takes almost 10 minutes, so most of the startup time is spent pulling the image. Is this expected behaviour for Fargate? It didn't happen when using managed nodes on EKS!
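If it helps anyone reproduce the numbers: the pull time can be read straight off the pod's `Pulling`/`Pulled` events (e.g. from `kubectl get events -o json` or `kubectl describe pod`). A small sketch of the arithmetic, with illustrative timestamps rather than real ones from this cluster:

```python
from datetime import datetime, timezone

def pull_duration(pulling_ts: str, pulled_ts: str) -> float:
    """Seconds elapsed between a pod's 'Pulling' and 'Pulled' events.

    Timestamps use the RFC 3339 format kubectl emits, e.g. 2024-01-01T10:00:45Z.
    """
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    start = datetime.strptime(pulling_ts, fmt).replace(tzinfo=timezone.utc)
    end = datetime.strptime(pulled_ts, fmt).replace(tzinfo=timezone.utc)
    return (end - start).total_seconds()

# Illustrative event timestamps showing a ~9.5 minute pull:
print(pull_duration("2024-01-01T10:00:45Z", "2024-01-01T10:10:15Z"))  # 570.0
```

Comparing that figure against the same image on a managed node group (where layers may already be cached) makes the Fargate cold-pull overhead explicit.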
The values.yaml file is also very minimal:
Is this a known issue with EKS Fargate, or are there additional components needed for Workbench to work with EKS Fargate?