This repository has been archived by the owner on Apr 8, 2022. It is now read-only.

Enhance Config (.kubewatch.yaml) for granular alerting #176

Open · wants to merge 2 commits into base: master

Conversation

@codenio (Contributor) commented Jun 7, 2019

This PR

  • Adds "Event" config section in the .kubewatch.yaml file for granular alerting
  • Makes Event config optional
  • Makes Resource config optional for backward compatibility
  • Enables configuration of alerts either using the "Resource" config or "Event" Config
  • Renames "Services" option to "Service" option in .kubewatch.yaml file
  • Populates the Namespace details from the event's key if the Namespace field is empty

Upon Merging

  • Enables the user to customize alerts based on the Event configuration provided.
    e.g.:
    pod - alerts on creation and deletion events can be configured.
    svc - alerts on deletion can be configured individually.

Example Configs:

Using Resource Config:

$ cat ~/.kubewatch.yaml
handler:
  slack:
    token: xoxb-xxxxxxxxx-yyyyyyyyyy
    channel: kube-watch-test
resource:
  deployment: false
  replicationcontroller: false
  replicaset: false
  daemonset: false
  service: true
  pod: true
  job: false
  persistentvolume: false
  namespace: false
  secret: false
  configmap: false
  ingress: false
namespace: ""

Using the Resource config sends all events for the enabled resources to the specified channel.
Note: this section is kept unchanged for backward compatibility.
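
For reference, a minimal Go sketch of how this resource section maps onto a struct of booleans, one per resource kind. This is an illustrative abridgement, not necessarily the exact kubewatch source:

package config

// Resource mirrors the "resource" section of .kubewatch.yaml:
// one boolean per resource kind, true meaning "watch this kind".
type Resource struct {
	Deployment bool `json:"deployment"`
	Service    bool `json:"service"`
	Pod        bool `json:"pod"`
	Job        bool `json:"job"`
	// ... one field per remaining kind (replicationcontroller,
	// replicaset, daemonset, persistentvolume, namespace, secret,
	// configmap, ingress).
}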

Using Event Config:

$ cat ~/.kubewatch.yaml
handler:
  slack:
    token: xoxb-xxxxx-yyyyyyy
    channel: kube-watch-test

event:
  global:                       # global section alerts on all events
    - pod
    - deployment
  create:                       # create section alerts on resource object creation
    - service
  update:                       # update section alerts on resource object updates
    -
  delete:                       # delete section alerts on resource object deletion
    - job
    - service

namespace: ""

Using the Event config,

  • all events for pod and deployment,
  • create and delete events for service, and
  • delete events for job

will be sent to the specified channel.
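
To make those semantics concrete, here is a minimal, hypothetical Go sketch of how the event section could be represented and consulted. The type and function names are illustrative, not necessarily those used in this PR:

package config

// Event mirrors the proposed "event" section of .kubewatch.yaml.
// Kinds listed under Global alert on every event type; the
// Create, Update and Delete lists enable alerts per event type.
type Event struct {
	Global []string `json:"global"`
	Create []string `json:"create"`
	Update []string `json:"update"`
	Delete []string `json:"delete"`
}

// contains reports whether list holds the given resource kind.
func contains(list []string, kind string) bool {
	for _, k := range list {
		if k == kind {
			return true
		}
	}
	return false
}

// Allows reports whether an alert should fire for the given resource
// kind ("pod", "service", ...) and event type ("create", "update",
// "delete").
func (e Event) Allows(kind, eventType string) bool {
	if contains(e.Global, kind) {
		return true
	}
	switch eventType {
	case "create":
		return contains(e.Create, kind)
	case "update":
		return contains(e.Update, kind)
	case "delete":
		return contains(e.Delete, kind)
	default:
		return false
	}
}

With the example config above, Allows("service", "update") would return false while Allows("service", "delete") would return true.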

This PR addresses issues #105 and #163.

I would like to discuss this PR and get it merged after peer review and feedback.

@codenio (Contributor Author) commented Jun 7, 2019

@tylerauerbeck, @jbianquetti-nami, @jjo
please help with your views, comments, and feedback.

@codenio (Contributor Author) commented Jun 17, 2019

We could discuss this feature (when people are free) for improvements; altering the current config suitably would support further customisation.

@bitsofinfo

Definitely want this, plus label selectors to further target specific things to monitor.

@codenio (Contributor Author) commented Jul 10, 2019

Reminder ping
Could this be reviewed and merged?

@codenio (Contributor Author) commented Oct 11, 2019

  • annotation-based alerting
  • alerts on critical events like node reboot and node not ready

Altering the current config suitably would support customisation.

FYI, these features are now available with Botkube (0.9.0).
People in need can make use of it. 👍

@davidegiunchi

Botkube lacks other features (like Teams integration), so I hope this PR will be merged; I think it will be useful for a lot of users. Right now, even on a small k8s deployment, a small activity like a deployment or a scale-up creates a lot of notifications (6 or more), which makes the notifications unreadable and mute-prone.

@codenio (Contributor Author) commented Jul 3, 2020

Rebased onto master.

@mkmik commented Jul 6, 2020

conflicts

@codenio (Contributor Author) commented Jul 7, 2020

Resolved conflicts and rebased onto master.

@codenio (Contributor Author) commented Jul 8, 2020

@mkmik rebased onto master.

This commit

- Adds "Event" config section in the .kubewatch.yaml file for granular alerting
- Makes Event config optional
- Makes Resource config optional for backward compatibility
- Enables configuration of alerts either using the "Resource" config or "Event" Config
- Renames "Services" option to "Service" option in .kubewatch.yaml file
- Populates the Namespace details from the event's key if the Namespace field is empty

Upon Merging

- Enables the user to customize alerts based on the Event configuration provided.
  e.g.: pod - alerts on creation and deletion events can be configured.
      svc - alerts on deletion can be configured individually.
Resolved review comments on README.md and cmd/resource.go (outdated).
config/config.go (outdated):

-	Resource Resource `json:"resource"`
+	// For watching specific namespace, leave it empty for watching all.
+	Resource Resource `json:"resource,omitempty"`

It seems that in your rebase+conflict resolutions you reverted the comments.

In #233 I added a feature that generates a "sample" configuration file annotated with doc comments:

$ kubewatch config sample

For this to work, the Config Go type (and all structs used for its value) must have meaningful comments following the godoc rules; namely, if a comment directly precedes a field declaration, it is considered that field's documentation comment.

This is why the commented-out //Reason.. statement has been separated from the Resource field by an empty line.
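
As an illustration of that godoc rule, here is a hedged sketch (not the exact kubewatch source; it reuses the Resource type sketched earlier and abridges the field set):

package config

// Config holds the kubewatch configuration loaded from .kubewatch.yaml.
type Config struct {
	// Reason is currently unused.
	// (The blank line below detaches this comment from the Resource
	// field, so godoc does not treat it as Resource's doc comment.)

	// Resource selects which resource kinds to watch.
	Resource Resource `json:"resource,omitempty"`

	// Namespace restricts watching to a specific namespace;
	// leave it empty to watch all namespaces.
	Namespace string `json:"namespace,omitempty"`
}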

@codenio (Contributor Author) replied:

Can you be specific about the change needed to correct this?

Resolved review comment on config/config_test.go (outdated).
This commit
- attempts to resolve the review comments
- replaces Service with Services to follow convention
@ypicard commented Nov 19, 2021

Any news on this?
