First implementation of A/R Calculation on Flink
New features/Enhancements
This is the first release of ARGO-Streaming, which replicates the current A/R Calculation functionality on Flink instead of Hadoop
ARGO-614 AMS subscriber stream to HBase
ARGO-625 Python client to publish consumer metric data to AMS
ARGO-648 Establish rate when pulling messages
ARGO-668 Add streaming job for status latest results
ARGO-720 Store status results in HBase
ARGO-721 Add monitored and processed timestamps
ARGO-727 Add monitoring host to status event schema
ARGO-769 Ability to store raw data in HDFS
ARGO-798 Generate status events at the start of a new day
ARGO-768 Add Foundation class for Status Batch Job
ARGO-808 Implement endpoint status calculation step in status batch job
ARGO-809 Implement Calculation of Service Status timelines
ARGO-810 Implement Calculation of Endpoint Group Status Timelines
ARGO-825 Add Mon-engine exclusion mechanism to status batch-job
ARGO-828 Refactor AMS Subscriber to support event replayability using offset management
ARGO-893 Implement Weights Manager and Downtime Manager
ARGO-895 Create Metric Timelines
ARGO-896 Create endpoint timelines
ARGO-897 Create Service Timelines
ARGO-898 Create Endpoint Group Timelines
ARGO-899 Calculate Service A/R results in Flink job
ARGO-900 Calculate Endpoint Group A/R results in Flink job
ARGO-901 Create Service A/R output format to datastore
ARGO-902 Create Endpoint Group A/R output format to datastore
Fixes
ARGO-229 Dependency Fix
ARGO-800 Remove unused kafka cli arguments
Documentation updates
A first iteration of the documentation was created.