Merge pull request #33 from wso2/v5.5.x
Merge 5.5.x to master branch
msmshariq authored Jun 14, 2018
2 parents 7ca8931 + dd1df63 commit dd5c2f0
Showing 120 changed files with 16,048 additions and 1 deletion.
101 changes: 101 additions & 0 deletions helm/is-with-analytics/README.md
@@ -0,0 +1,101 @@
# Helm Charts for deployment of WSO2 Identity Server with Analytics

## Prerequisites

* In order to use these Kubernetes resources, you will need an active [Free Trial Subscription](https://wso2.com/free-trial-subscription)
from WSO2, since the referenced Docker images hosted at docker.wso2.com contain the latest updates and fixes for WSO2 Identity Server.<br><br>

* Install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), [Helm](https://github.com/kubernetes/helm/blob/master/docs/install.md)
(and Tiller) and the [Kubernetes client](https://kubernetes.io/docs/tasks/tools/install-kubectl/) in order to run the
steps provided in the following quick start guide; a Tiller setup sketch follows this list.<br><br>

* Install [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/deploy/). This can
be easily done via
```
helm install stable/nginx-ingress --name nginx-wso2is-analytics --set rbac.create=true
```
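
If Tiller is not already running in the cluster, a minimal setup sketch for a Helm 2 environment with RBAC enabled could look like the following; the `tiller` service account name is an assumption for illustration, not a chart requirement:
```
# Create a service account for Tiller and bind it to cluster-admin
# (adjust the role binding to match your cluster's RBAC policies).
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Install Tiller into the cluster using that service account.
helm init --service-account tiller
```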
## Quick Start Guide
>In the context of this document, <br>
>* `KUBERNETES_HOME` will refer to a local copy of the [`wso2/kubernetes-is`](https://github.com/wso2/kubernetes-is/)
Git repository. <br>
>* `HELM_HOME` will refer to `<KUBERNETES_HOME>/helm/is-with-analytics`. <br>
##### 1. Check out the Kubernetes resources for the WSO2 Identity Server Git repository:

```
git clone https://github.com/wso2/kubernetes-is.git
```

##### 2. Provide configurations:

1. The default product configurations are available in the `<HELM_HOME>/is-with-analytics-conf/confs` folder. Change the
configurations as necessary.

2. Open `<HELM_HOME>/is-with-analytics-conf/values.yaml` and provide the following values (an illustrative sketch follows at the end of this step).

    `username`: Username of your Free Trial Subscription<br>
    `password`: Password of your Free Trial Subscription<br>
    `email`: Docker email<br>
    `namespace`: Namespace<br>
    `svcaccount`: Service Account<br>
    `serverIp`: NFS Server IP<br>
    `locationPath`: NFS location path<br>
    `sharedDeploymentLocationPath`: NFS shared deployment directory (`<IS_HOME>/repository/deployment`) location for IS<br>
    `sharedTentsLocationPath`: NFS shared tenants directory (`<IS_HOME>/repository/tenants`) location for IS<br>
    `analytics1DataLocationPath`: NFS volume for indexed data for Analytics node 1 (`<DAS_HOME>/repository/data`)<br>
    `analytics2DataLocationPath`: NFS volume for indexed data for Analytics node 2 (`<DAS_HOME>/repository/data`)

3. Open `<HELM_HOME>/is-with-analytics-deployment/values.yaml` and provide the following values.

`namespace`: Namespace<br>
`svcaccount`: Service Account
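
For illustration only, a filled-in `is-with-analytics-conf/values.yaml` for step 2.2 might look like the sketch below. Every value shown (credentials, namespace, IP, paths) is a hypothetical placeholder; only the key names come from the list above.

```
username: user@example.com            # Free Trial Subscription username (placeholder)
password: <YOUR_PASSWORD>             # Free Trial Subscription password (placeholder)
email: user@example.com               # Docker email (placeholder)
namespace: wso2                       # Kubernetes namespace (placeholder)
svcaccount: wso2svc-account           # Service account (placeholder)
serverIp: 192.168.1.100               # NFS server IP (placeholder)
locationPath: /exports/is             # NFS location path (placeholder)
sharedDeploymentLocationPath: /exports/is/deployment      # <IS_HOME>/repository/deployment
sharedTentsLocationPath: /exports/is/tenants              # <IS_HOME>/repository/tenants
analytics1DataLocationPath: /exports/analytics-1/data     # <DAS_HOME>/repository/data, node 1
analytics2DataLocationPath: /exports/analytics-2/data     # <DAS_HOME>/repository/data, node 2
```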

##### 3. Deploy the configurations:

```
helm install --name <RELEASE_NAME> <HELM_HOME>/is-with-analytics-conf
```
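
As a quick sanity check, the new release should now appear in the release list:
```
# Expect the configuration release to be listed with STATUS "DEPLOYED".
helm ls
```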

##### 4. Deploy MySQL:
If you are using external product database(s), add those configurations as stated in `step 2.1`. Otherwise, run the command
below to create the product database.
```
helm install --name wso2is-with-analytics-rdbms-service -f <HELM_HOME>/mysql/values.yaml stable/mysql --namespace <NAMESPACE>
```
`NAMESPACE` should be the same as the namespace provided in `step 2.2`.

##### 5. Deploy WSO2 Identity Server:

```
helm install --name <RELEASE_NAME> <HELM_HOME>/is-with-analytics-deployment
```
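
Before moving on, it may help to wait until the Identity Server and Analytics pods reach the `Running` state (namespace as provided in step 2.2):
```
# Watch pod status until all pods are Running and Ready (Ctrl+C to stop).
kubectl get pods --namespace <NAMESPACE> -w
```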

##### 6. Access the Management Console:

The default deployment will expose two publicly accessible hosts:<br>
1. `wso2is` - exposes administrative services and the Management Console<br>
2. `wso2is-analytics` - exposes the Analytics server<br>

To access the console in a test environment,

1. Obtain the external IP (`EXTERNAL-IP`) of the Ingress resources by listing the Kubernetes Ingresses (using `kubectl get ing`).

e.g.

```
NAME HOSTS ADDRESS PORTS AGE
wso2is-with-analytics-is-analytics-ingress wso2is-analytics <EXTERNAL-IP> 80, 443 9m
wso2is-with-analytics-is-ingress wso2is <EXTERNAL-IP> 80, 443 9m
```

2. Add the above two hosts as entries in the `/etc/hosts` file as follows:

```
<EXTERNAL-IP> wso2is
<EXTERNAL-IP> wso2is-analytics
```

3. Try navigating to `https://wso2is/carbon` in your favorite browser.
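
Alternatively, reachability can be checked from the command line. The `-k` flag skips certificate verification, assuming the default deployment serves a self-signed certificate:

```
# Print only the HTTP status code returned for the Management Console URL.
curl -k -s -o /dev/null -w '%{http_code}\n' https://wso2is/carbon
```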

35 changes: 35 additions & 0 deletions helm/is-with-analytics/is-with-analytics-conf/.helmignore
@@ -0,0 +1,35 @@
# Copyright (c) 2018, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
19 changes: 19 additions & 0 deletions helm/is-with-analytics/is-with-analytics-conf/Chart.yaml
@@ -0,0 +1,19 @@
# Copyright (c) 2018, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
appVersion: "1.0"
description: A Helm chart for the deployment of WSO2 IS-Analytics configurations
name: is-conf
version: 1.0.0
10 changes: 10 additions & 0 deletions helm/is-with-analytics/is-with-analytics-conf/auth.json
@@ -0,0 +1,10 @@
{
  "auths": {
    "docker.wso2.com": {
      "username": "docker.wso2.com.username",
      "password": "docker.wso2.com.password",
      "email": "docker.wso2.com.email",
      "auth": "docker.wso2.com.auth"
    }
  }
}
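
The placeholder values above follow the Docker `config.json` convention, in which the `auth` field is typically the Base64 encoding of `username:password`. A value can be generated like this (credentials purely illustrative):
```
echo -n 'docker.wso2.com.username:docker.wso2.com.password' | base64
```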
@@ -0,0 +1 @@
wso2is-analytics-1
@@ -0,0 +1,122 @@
# ------------------------------------------------------
# CARBON RELATED SPARK PROPERTIES
# ------------------------------------------------------
# Carbon specific properties when running Spark in the Carbon environment.
# Should start with the prefix "carbon."

# carbon.spark.master config has 3 states
# 1. (default) local mode - spark starts in the local mode (NOTE: carbon.spark.master.count property
# will not be considered here)
# ex: "carbon.spark.master local" or "carbon.spark.master local[2]"
# 2. client mode - DAS acts as a client for an external Spark cluster (NOTE: carbon.spark.master.count property
# will not be considered here)
# ex: "carbon.spark.master spark://<host name>:<port>"
# 3. cluster mode - DAS creates its own Spark cluster using Carbon Clustering
# ex: "carbon.spark.master local" AND "carbon.spark.master.count <number of redundant masters>"

carbon.spark.master local
carbon.spark.master.count 2

# This configuration is used to limit the number of results returned from Spark query execution.
# To return all the results, set this to -1.
carbon.spark.results.limit 1000

# The configuration below can be used to point to a symbolic link to WSO2 DAS HOME
# carbon.das.symbolic.link /home/ubuntu/das/das_symlink/

# The configuration below can be used with the Spark fair scheduler, when fair scheduler pools are used. The
# default pool name for Carbon is 'carbon-pool'
# carbon.scheduler.pool carbon-pool



# ------------------------------------------------------
# SPARK PROPERTIES
# ------------------------------------------------------
# Default system properties included when running spark.
# This is useful for setting default environmental settings.
# Check http://spark.apache.org/docs/latest/configuration.html for further information

# Application (Spark Driver) Properties
# ------------------------------------------------------
spark.app.name CarbonAnalytics
# The Spark Driver will be running inside the Carbon JVM. Hence, the properties below are obsolete.
# spark.driver.cores 1
# spark.driver.memory 512m

# Runtime Environment
# ------------------------------------------------------

# Spark UI
spark.ui.port 4040
spark.history.ui.port 18080

# Compression and Serialization
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer 256k
spark.kryoserializer.buffer.max 256m

# Execution Behavior

# Networking
spark.blockManager.port 12000
spark.broadcast.port 12500
spark.driver.port 13000
spark.executor.port 13500
spark.fileserver.port 14000
spark.replClassServer.port 14500
spark.akka.timeout 1000s

# Scheduling
spark.scheduler.mode FAIR
# This property can be set to specify where the fairscheduler.xml file is. The Carbon-specific
# fairscheduler.xml is in the <DAS_HOME>/repository/conf/analytics/spark directory
# spark.scheduler.allocation.file <DAS_HOME>/repository/conf/analytics/spark/fairscheduler.xml

# Dynamic Allocation

# Security

# Encryption

# Standalone Cluster Configs
spark.deploy.recoveryMode CUSTOM
spark.deploy.recoveryMode.factory org.wso2.carbon.analytics.spark.core.deploy.AnalyticsRecoveryModeFactory

# Master
spark.master.port 7077
spark.master.rest.port 6066
spark.master.webui.port 8081

# Worker
spark.worker.cores 1
spark.worker.memory 1g
spark.worker.dir work
spark.worker.port 11000
spark.worker.webui.port 11500

# Executors
# spark.executor.cores 1 ; Default: Takes all the available cores in the worker
spark.executor.memory 1g
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 10000000
spark.executor.logs.rolling.maxRetainedFiles 10

# spark.cores.max ; Default: Int.MAX_VALUE; The maximum amount of CPU cores to request for the application from across
# the cluster (not from each machine)


# Spark Logging
# ------------------------------------------------------
# To enable event logging for Spark, uncomment the
# spark.eventLog.enabled line below and set the directory in which the
# logs will be stored.

# spark.eventLog.enabled true
# spark.eventLog.dir <PATH_FOR_SPARK_EVENT_LOGS>

# YARN related configs
# ------------------------------------------------------
# spark.yarn.jar <path to the spark-core_2.10_1.4.3.wso2v1.jar>

