
THIS IS AN EARLY DRAFT UNDER CONSTRUCTION

WebLogic Server Certification

Q: Which Java EE profiles are supported/certified on Kubernetes? Only the Web Profile, or the full WebLogic Java EE platform?

A: We support the Java EE Full Profile.


Q: Are XA transactions and recovery also supported? Are any customers using this on JCS?

A: Yes, XA transactions are supported, and we are expanding our certification to include more complex cross-domain XA transaction use cases.


WebLogic Server Configuration

Q: How is the WebLogic Server domain configured in a Docker container (e.g., databases, JMS, etc.) that is potentially shared by many domains?

A: In Kubernetes and Docker environments, the WebLogic domain is externalized to a persistent volume.

  • Sample WebLogic domain in Docker: https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/12213-domain
  • Sample WebLogic domain on Kubernetes: https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain

Kubernetes pods and Docker containers are ephemeral; to allow for pod/container/server mobility, we externalize state (domain home, logs, stores) to persistent volumes.
There are different ways to configure a WebLogic domain: you can use WLST or REST, and an Oracle team is developing a tool, available in the near future, called the WebLogic Deploy Tooling, which takes the WebLogic configuration (including JMS, data sources, applications, authenticators, etc.) and creates a YAML model file and a zip archive (which includes the binaries). A Kubernetes Job takes the YAML file and the zip archive and creates the domain on the persistent volume, making it easy to take existing WebLogic deployments and move them to Kubernetes platforms.
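As an illustration only, a Job of roughly the following shape could run that domain-creation step against a persistent volume. The image name, script and model/archive paths, and claim name below are hypothetical placeholders, not the artifacts the tooling actually produces:

```yaml
# Illustrative sketch: image, paths, and PVC name are hypothetical placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: create-weblogic-domain-job
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: create-domain
        image: my-registry/weblogic-deploy:12.2.1.3      # hypothetical image with WebLogic, the Deploy Tooling, model, and archive
        command: ["/bin/sh", "-c"]
        args:
        - >
          /u01/weblogic-deploy/bin/createDomain.sh
          -oracle_home /u01/oracle
          -domain_home /shared/domains/base_domain
          -model_file /u01/model/domain.yaml
          -archive_file /u01/model/domain.zip
        volumeMounts:
        - name: domain-storage
          mountPath: /shared                             # the domain home is written to the persistent volume
      volumes:
      - name: domain-storage
        persistentVolumeClaim:
          claimName: weblogic-domain-pvc                 # hypothetical claim backing the shared volume
```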


Q: Is the Admin Server required? Is the Node Manager?

A: Certification of both WebLogic on Docker and WebLogic on Kubernetes is based on a WebLogic domain that includes the Admin Server. The Node Manager is used differently depending on whether we are talking about Docker or Kubernetes:

  • In Docker, the Node Manager runs in the same container as the Managed Server to define a WebLogic machine and to allow the Admin Server to start and stop the Managed Servers.
  • When we use the WebLogic Kubernetes Operator to manage the WebLogic domain, the Node Manager is used internally by the Operator in the Kubernetes liveness probe, to start the server, and to stop the server gracefully.

Communications

Q: How is location transparency achieved, and how is communication between WebLogic Server instances handled? Over T3/T3S?

A: Inside the Kubernetes cluster we use a ClusterIP service (which acts as a DNS name) and an Ingress controller, which allows the pods running WebLogic servers to move to different nodes in the Kubernetes cluster and continue communicating with the other servers.
For T3 communication from outside the Kubernetes cluster, we use a NodePort service and configure a WebLogic network channel for T3 RMI. Please read the blog https://blogs.oracle.com/weblogicserver/t3-rmi-communication-for-weblogic-server-running-on-kubernetes, which explains RMI T3 communication in detail. All qualities of service are supported for T3 and HTTP communication (including SSL and the administration channel).
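As a rough sketch of those two service types (the service names, labels, and port numbers here are made up for illustration, and the T3 channel port must match a network access point configured on the WebLogic server with a public address reachable from outside the cluster):

```yaml
# Hypothetical names, labels, and ports.
apiVersion: v1
kind: Service
metadata:
  name: domain1-managed-server1            # stable DNS name other servers use inside the cluster
spec:
  selector:
    weblogic.serverName: managed-server1
  ports:
  - name: default
    port: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: domain1-managed-server1-external   # NodePort service for T3 traffic from outside the cluster
spec:
  type: NodePort
  selector:
    weblogic.serverName: managed-server1
  ports:
  - name: t3channel
    port: 30012
    targetPort: 30012
    nodePort: 30012                        # must fall in the cluster's NodePort range (30000-32767 by default)
```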


Q: Are WebLogic clusters supported on Kubernetes using both multicast and unicast?

A: Only unicast is supported. Most Kubernetes network fabrics do not support multicast communication. Weave claims to support multicast, but it is a commercial network fabric. We have certified only on Flannel, which supports only unicast.


Q: Docker resources: we use an SSL listener, an SSL port, and an admin port on each instance. How do we map these resources?

A: Yes, you can use an SSL listener, an SSL port, and an admin port in both Docker and Kubernetes. Please refer to the blog https://blogs.oracle.com/weblogicserver/t3-rmi-communication-for-weblogic-server-running-on-kubernetes, which describes how to map these ports in Kubernetes.
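For illustration, a Service of roughly this shape could expose the three listen ports of a server pod; the selector, service name, and port numbers are assumptions, not a prescribed mapping:

```yaml
# Sketch only: selector, service name, and port numbers are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: domain1-admin-server
spec:
  selector:
    weblogic.serverName: admin-server
  ports:
  - name: default            # plain listen port
    port: 7001
  - name: default-secure     # SSL listen port
    port: 7002
  - name: admin              # WebLogic administration port
    port: 9002
```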


Load Balancers

Q: How are load balancing and failover handled inside a data center (HTTPS and T3S)?

A: We originally certified with Traefik in the Kubernetes cluster, which is a very basic load balancer. We are in the process of certifying more sophisticated load balancers such as the HTTP proxy plug-in, NGINX, and OHS (Oracle HTTP Server).
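As an example of the Traefik-based approach, an Ingress resource along these lines routes HTTP traffic to a WebLogic cluster service; the host name, service name, and port are placeholders:

```yaml
# Illustrative only: host, service name, and port are placeholders. The API version
# shown is the Ingress version current at the time this page was written.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: domain1-ingress
  annotations:
    kubernetes.io/ingress.class: traefik   # handled by the Traefik Ingress controller
spec:
  rules:
  - host: domain1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: domain1-cluster-cluster-1   # ClusterIP service fronting the WebLogic cluster
          servicePort: 8001
```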


Lifecycle and Scaling

Q: How do we handle growing and shrinking, in cluster and non-cluster mode?

A: You can grow and shrink a configured WebLogic cluster (one with a pre-configured number of Managed Servers) using several methods; please see the blog https://blogs.oracle.com/weblogicserver/automatic-scaling-of-weblogic-clusters-on-kubernetes-v2:

  • Manually, using the Kubernetes command-line interface kubectl.
  • With WLDF rules and policies: when a rule is met, the Admin Server sends a REST call to the Operator, which calls the Kubernetes API to start a new pod/container/server.
  • With the WebLogic Monitoring Exporter, which we have developed and open sourced and which exports WebLogic metrics to Prometheus and Grafana. In Prometheus you can set rules similar to WLDF rules; when a rule is met, a REST call is made to the Operator, which invokes the Kubernetes API to start a new pod (see the rule sketch after this list). Please refer to the blog https://blogs.oracle.com/weblogicserver/announcing-the-new-weblogic-monitoring-exporter-v2
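A minimal sketch of such a Prometheus rule is shown below; the metric name and threshold are illustrative, and the actual scaling call would be made by an Alertmanager webhook receiver configured to invoke the Operator's scaling REST endpoint (not shown):

```yaml
# Sketch of a Prometheus alerting rule; the metric name (as exposed by the
# WebLogic Monitoring Exporter) and the threshold are illustrative.
groups:
- name: weblogic-scaling
  rules:
  - alert: ScaleUpWebLogicCluster
    expr: sum(webapp_config_open_sessions_current_count) > 15
    for: 1m
    labels:
      action: scale-up
    annotations:
      summary: Open session count is high; add a Managed Server to the WebLogic cluster.
```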

Very soon we will support WebLogic Dynamic clusters in Kubernetes.


Q: Container lifecycle: how do we properly spin up and gracefully shut down WebLogic Server in a container?

A: The Operator manages the container/pod/WebLogic server lifecycle automatically; it uses the Node Manager (internally) to perform the following operations:

  • ENTRYPOINT – start the WebLogic server
  • Liveness probe – check whether the WebLogic server is alive
  • Readiness probe – query whether the WebLogic server is ready to receive requests
  • Shutdown hook – gracefully shut down the WebLogic server

These operations can also be performed manually using the Kubernetes command-line interface kubectl.
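For illustration, the container-level pieces described above correspond roughly to the following pod spec fragment; the image name, script paths, and timings are hypothetical (the Operator creates and wires up the real equivalents):

```yaml
# Minimal sketch: image name, script paths, and timings are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: domain1-managed-server1
spec:
  containers:
  - name: weblogic-server
    image: my-registry/weblogic-domain:12.2.1.3     # hypothetical image with the WebLogic installation
    command: ["/scripts/startServer.sh"]            # ENTRYPOINT: start the server (via the Node Manager)
    livenessProbe:
      exec:
        command: ["/scripts/livenessProbe.sh"]      # is the WebLogic server still alive?
      initialDelaySeconds: 30
      periodSeconds: 5
    readinessProbe:
      exec:
        command: ["/scripts/readinessProbe.sh"]     # is the WebLogic server ready to receive requests?
      initialDelaySeconds: 30
      periodSeconds: 5
    lifecycle:
      preStop:
        exec:
          command: ["/scripts/stopServer.sh"]       # shutdown hook: gracefully shut the server down
```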


Q: For binding EJBs (presentation/business tier), are unique and/or dynamic domain names used?

A: We do not enforce unique domain names; you can configure your environment so that your domain names are unique, and we support that model.


Patching and Upgrades

Q: Patching: how are rolling upgrades, one-off patches and overlays, CPUs (Critical Patch Updates), etc. handled?

A: Patches are applied using OPatch and rolled out in the following fashion:


Security

Q: How are certificates and CA provisioning handled with containers?

A: The WebLogic Kubernetes Operator will generate self-signed certificates based on the subject alternative names that you provide, and we will automatically configure the keystores, etc., for SSL. Alternatively, you can provide your own certificates, either obtained from a CA or created yourself.


Diagnostics and Logging

Q: Integration with ecosystems: logging, monitoring (OS, JVM, and application level), etc.

A: WebLogic logs are persisted to an external volume. We are working on a project to integrate WebLogic Server logs with the Elastic Stack.

With regard to monitoring, all the tools that are traditionally used to monitor WebLogic can still be used in Docker and Kubernetes. In addition, as mentioned above, we have developed the WebLogic Monitoring Exporter, which exports WebLogic metrics in a format that can be scraped by Prometheus and displayed in dashboards such as Grafana.
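For example, the exporter is driven by a small YAML configuration that selects which WebLogic runtime MBean attributes to export; the attributes chosen below are just an illustration (consult the exporter's documentation for the exact schema):

```yaml
# Illustrative exporter configuration: exports a couple of web application
# session metrics; attribute and key names are examples only.
metricsNameSnakeCase: true
queries:
- applicationRuntimes:
    key: name
    keyName: app
    componentRuntimes:
      type: WebAppComponentRuntime
      prefix: webapp_config_
      key: name
      values: [openSessionsCurrentCount, openSessionsHighCount]
```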


High Availability Considerations

Q: Business continuity: multi-Kubernetes setup (and multi-data-center setup), high availability, and disaster recovery for the Kubernetes store.

A: This is on our roadmap for WebLogic on Kubernetes.

