This repository has been archived by the owner on Nov 15, 2022. It is now read-only.

Add healthcheck endpoint #102

Closed
wants to merge 15 commits into from
1 change: 1 addition & 0 deletions .ruby-version
@@ -0,0 +1 @@
2.7.1
1 change: 1 addition & 0 deletions Dockerfile
@@ -25,6 +25,7 @@ RUN apk del curl ca-certificates wget

COPY fluentd /fluentd
COPY create-conf.rb /create-conf.rb
COPY conf-utils.rb /conf-utils.rb
COPY start.sh /start.sh

USER root
2 changes: 2 additions & 0 deletions Gemfile
@@ -20,4 +20,6 @@ gem 'fluent-plugin-grafana-loki'
gem 'fluent-plugin-remote_syslog'
gem 'fluent-plugin-elasticsearch'
gem 'fluent-plugin-bigquery'
gem 'fluent-plugin-prometheus'
gem 'fluent-plugin-http-healthcheck'
gem 'test-unit'
7 changes: 7 additions & 0 deletions Gemfile.lock
@@ -109,6 +109,8 @@ GEM
fluentd (>= 0.14.22)
fluent-plugin-grafana-loki (1.2.18)
fluentd (>= 1.9.3, < 2)
fluent-plugin-http-healthcheck (0.1.0)
fluentd (>= 0.14.10, < 2)
fluent-plugin-kafka (0.17.5)
fluentd (>= 0.10.58, < 2)
ltsv
@@ -119,6 +121,9 @@ GEM
fluent-plugin-mongo (1.5.0)
fluentd (>= 0.14.22, < 2)
mongo (~> 2.6.0)
fluent-plugin-prometheus (2.0.3)
fluentd (>= 1.9.1, < 2)
prometheus-client (>= 2.1.0)
fluent-plugin-remote_syslog (1.0.0)
fluentd
remote_syslog_sender (>= 1.1.1)
@@ -338,9 +343,11 @@ DEPENDENCIES
fluent-plugin-datadog
fluent-plugin-elasticsearch
fluent-plugin-grafana-loki
fluent-plugin-http-healthcheck
fluent-plugin-kafka
fluent-plugin-logzio
fluent-plugin-mongo
fluent-plugin-prometheus
fluent-plugin-remote_syslog
fluent-plugin-rewrite-tag-filter
fluent-plugin-s3
12 changes: 9 additions & 3 deletions README.md
@@ -18,10 +18,11 @@ The Log Export Container is a Docker Image you can use for spinning up multiple

1. Download the `docker-compose.yml` file from the Github repo onto your machine (or copy-paste its contents into a file you created directly on the machine with the same name).
- Make sure that the 'Required variables' in the .yml file are set appropriately based on your desired log format and output destination.
2. Run `sudo docker-compose up`
2. Run the container with your preferred orchestrator (with Docker, simply run `docker-compose up`)
3. Log into the strongDM Admin UI and go to the Settings page, then the Log Encryption & Storage tab.
4. Set "Log locally on relays?" to 'Yes'
5. Set "Local storage?" to "Syslog" and enter the IP address of the machine running the Log Export Container along with port 5140 ![image](https://user-images.githubusercontent.com/7840034/127934335-239b5e97-772c-4ac6-8e66-864ffaf4cccc.png)
5. Set "Local storage?" to "Syslog" and enter the IP address of the machine running the Log Export Container along with port 5140
- ![image](https://user-images.githubusercontent.com/7840034/127934335-239b5e97-772c-4ac6-8e66-864ffaf4cccc.png)
- Make sure that port 5140 on the machine hosting the container is accessible from your gateways. You can also host the container on your gateways themselves.
6. Set "Local format?" to match the input format you specified in the .yml file.
7. Click "Update" and you're done!
@@ -39,7 +40,12 @@ If you want to modify the container and quickly see the changes in your local, y
./dev-tools/start-container.sh
```

You could also run the project in your local without docker, please refer to [CONFIGURE_LOCAL_ENV](docs/CONFIGURE_LOCAL_ENV.md)
You can also run the project locally without Docker; please refer to [CONFIGURE_LOCAL_ENV](docs/deploy_log_export_container/CONFIGURE_LOCAL_ENV.md)

## Monitoring
Currently the application exposes Prometheus metrics about received and forwarded logs. For more details, please see [CONFIGURE_PROMETHEUS](docs/monitoring/CONFIGURE_PROMETHEUS.md)
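
For a quick look at what the exporter serves, the following Ruby sketch (not part of this PR) fetches and prints the metrics. It assumes the fluent-plugin-prometheus default port 24231 and the `/metrics` path; the port this container actually exposes is documented in CONFIGURE_PROMETHEUS.

```ruby
# Minimal sketch: fetch and print the Prometheus metrics exposed by fluentd.
# Port 24231 and the /metrics path are the fluent-plugin-prometheus defaults
# and are assumptions here; adjust them to match your configuration.
require 'net/http'

metrics = Net::HTTP.get(URI('http://localhost:24231/metrics'))

# Print only the metric samples, skipping the "# HELP" / "# TYPE" comment lines.
metrics.each_line { |line| puts line unless line.start_with?('#') }
```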

There is also a health check endpoint available at `http://localhost:24322`. It returns a "200 OK" HTTP response with an empty body when the container is healthy.
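
As an illustration, an external probe against that endpoint could look like the Ruby sketch below (not part of this PR); the only assumption is the port 24322 mentioned above.

```ruby
# Minimal sketch of an external health probe for the log export container.
# The port 24322 comes from the paragraph above; override it if your setup differs.
require 'net/http'

response = Net::HTTP.get_response(URI('http://localhost:24322/'))

# A healthy container answers with an empty 200 body.
if response.is_a?(Net::HTTPOK)
  puts 'log export container is healthy'
else
  warn "unexpected response: #{response.code}"
  exit 1
end
```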

## Contributing
Refer to the [contributing](CONTRIBUTING.md) guidelines or dump part of the information here.
131 changes: 131 additions & 0 deletions conf-utils.rb
@@ -0,0 +1,131 @@

SUPPORTED_STORES="stdout remote-syslog s3 cloudwatch splunk-hec datadog azure-loganalytics sumologic kafka mongo logz loki elasticsearch bigquery"
AUDIT_ENTITY_TYPES = {
"resources" => "resource",
"users" => "user",
"roles" => "role",
}

def extract_value(str)
unless str
str = ""
end
str.gsub(/ /, "").downcase
end

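# Returns the extraction interval in minutes (e.g. "15m") for the given audit
# entity: if LOG_EXPORT_CONTAINER_EXTRACT_AUDIT contains "<entity>/<N>", N is
# used, otherwise default_interval.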
def extract_entity_interval(entity, default_interval)
treated_entity_list = ENV['LOG_EXPORT_CONTAINER_EXTRACT_AUDIT'].to_s.match /#{entity}\/+(\d+)/
if treated_entity_list != nil
interval = treated_entity_list[1]
else
interval = default_interval
end
"#{interval}m"
end

def extract_activity_interval
if ENV['LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL'] != nil
interval = "#{ENV['LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL']}m"
else
interval = extract_entity_interval("activities", "15")
end
interval
end

def monitoring_conf
monitoring_enabled = extract_value(ENV['LOG_EXPORT_CONTAINER_ENABLE_MONITORING']) == "true"
if monitoring_enabled
File.read("#{ETC_DIR}/monitoring.conf")
end
end

def output_stores_conf
conf = ""
output_types = extract_value(ENV['LOG_EXPORT_CONTAINER_OUTPUT'])
stores = SUPPORTED_STORES.split(' ')
stores.each do |store|
if output_types.include?(store)
conf = "#{conf}#{store} "
end
end
if conf == ""
return "stdout"
end
conf
end

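# Picks the input configuration file based on LOG_EXPORT_CONTAINER_INPUT
# (defaulting to syslog-json). For file-json input with a streamed audit
# entity, the log file path is pointed at the matching sdm-audit-<entity> log.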
def input_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
if conf != ""
filename = "#{ETC_DIR}/input-#{conf}.conf"
else
filename = "#{ETC_DIR}/input-syslog-json.conf"
end
stream_entity = extract_value(ENV["LOG_EXPORT_CONTAINER_STREAM_AUDIT_ENTITY"])
file = File.read(filename)
if conf == "file-json" && stream_entity != ""
file = file.gsub("\#{ENV['LOG_FILE_PATH']}", "/var/log/sdm-audit-#{stream_entity}.log")
end
file
end

def decode_chunk_events_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
decode_chunks_enabled = extract_value(ENV['LOG_EXPORT_CONTAINER_DECODE_CHUNK_EVENTS']) == "true"
if (conf == "syslog-json" || conf == "tcp-json") && decode_chunks_enabled
File.read("#{ETC_DIR}/input-json-chunk.conf")
end
end

def input_extract_audit_activities_conf
extract_activities = extract_value(ENV['LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES'])
extracted_entities = extract_value(ENV['LOG_EXPORT_CONTAINER_EXTRACT_AUDIT'])
unless extract_activities == "true" || extracted_entities.match(/activities/)
return
end
read_file = File.read("#{ETC_DIR}/input-extract-audit-activities.conf")
read_file['$interval'] = extract_activity_interval
read_file
end

def input_extract_audit_entity_conf(entity)
extract_audit = extract_value(ENV['LOG_EXPORT_CONTAINER_EXTRACT_AUDIT'])
unless extract_audit.match(/#{entity}/)
return
end
read_file = File.read("#{ETC_DIR}/input-extract-audit-entity.conf")
read_file['$tag'] = AUDIT_ENTITY_TYPES[entity]
read_file['$interval'] = extract_entity_interval(entity, "480")
read_file.gsub!("$entity", entity)
read_file
end

def default_classify_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
if conf == "syslog-csv" || conf == "tcp-csv" || conf == "file-csv"
filename = "#{ETC_DIR}/classify-default-csv.conf"
else
filename = "#{ETC_DIR}/classify-default-json.conf"
end
File.read(filename)
end

def custom_classify_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
if conf == "syslog-csv"
File.read("#{ETC_DIR}/classify-syslog-csv.conf")
elsif conf == "tcp-csv" || conf == "file-csv"
File.read("#{ETC_DIR}/classify-tcp-csv.conf")
end
end

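# Builds the output section by concatenating the per-store templates for every
# configured output store and substituting them into output-template.conf.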
def output_conf
output_content = []
stores = output_stores_conf.split(' ')
stores.each do |store|
output_content << File.read("#{ETC_DIR}/output-#{store}.conf")
end
template = File.read("#{ETC_DIR}/output-template.conf")
template["$stores"] = output_content.join("")
template
end
83 changes: 6 additions & 77 deletions create-conf.rb
@@ -1,90 +1,19 @@

ETC_DIR="#{ENV['FLUENTD_DIR']}/etc"
SUPPORTED_STORES="stdout remote-syslog s3 cloudwatch splunk-hec datadog azure-loganalytics sumologic kafka mongo logz loki elasticsearch bigquery"

def extract_value(str)
unless str
str = ""
end
str.gsub(/ /, "").downcase
end

def output_stores_conf
conf = ""
output_types = extract_value(ENV['LOG_EXPORT_CONTAINER_OUTPUT'])
stores = SUPPORTED_STORES.split(' ')
stores.each do |store|
if output_types.include?(store)
conf = "#{conf}#{store} "
end
end
if conf == ""
return "stdout"
end
conf
end

def input_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
if conf != ""
filename = "#{ETC_DIR}/input-#{conf}.conf"
else
filename = "#{ETC_DIR}/input-syslog-json.conf"
end
File.read(filename)
end

def decode_chunk_events_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
decode_chunks_enabled = extract_value(ENV['LOG_EXPORT_CONTAINER_DECODE_CHUNK_EVENTS']) == "true"
if (conf == "syslog-json" || conf == "tcp-json") && decode_chunks_enabled
File.read("#{ETC_DIR}/input-json-chunk.conf")
end
end

def input_extract_audit_activities_conf
extract_enabled = extract_value(ENV['LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES']) == "true"
if extract_enabled
File.read("#{ETC_DIR}/input-extract-audit-activities.conf")
end
end

def default_classify_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
if conf == "syslog-csv" || conf == "tcp-csv" || conf == "file-csv"
filename = "#{ETC_DIR}/classify-default-csv.conf"
else
filename = "#{ETC_DIR}/classify-default-json.conf"
end
File.read(filename)
end

def custom_classify_conf
conf = extract_value(ENV['LOG_EXPORT_CONTAINER_INPUT'])
if conf == "syslog-csv"
File.read("#{ETC_DIR}/classify-syslog-csv.conf")
elsif conf == "tcp-csv" || conf == "file-csv"
File.read("#{ETC_DIR}/classify-tcp-csv.conf")
end
end

def output_conf
output_content = []
stores = output_stores_conf.split(' ')
stores.each do |store|
output_content << File.read("#{ETC_DIR}/output-#{store}.conf")
end
template = File.read("#{ETC_DIR}/output-template.conf")
template["$stores"] = output_content.join("")
template
end
require_relative './conf-utils'

def create_file
File.open("#{ETC_DIR}/fluent.conf", "w") do |f|
f.write(input_conf)
f.write(input_extract_audit_activities_conf)
f.write(monitoring_conf)
f.write(input_extract_audit_entity_conf("resources"))
f.write(input_extract_audit_entity_conf("users"))
f.write(input_extract_audit_entity_conf("roles"))
f.write(default_classify_conf)
f.write(custom_classify_conf)
f.write(File.read("#{ETC_DIR}/healthcheck.conf"))
f.write(File.read("#{ETC_DIR}/process.conf"))
f.write(decode_chunk_events_conf)
f.write(output_conf)
16 changes: 16 additions & 0 deletions dev-tools/grafana/Dockerfile
@@ -0,0 +1,16 @@
FROM grafana/grafana

ENV GF_AUTH_DISABLE_LOGIN_FORM="true"
ENV GF_AUTH_ANONYMOUS_ENABLED="true"
ENV GF_AUTH_ANONYMOUS_ORG_ROLE="Admin"

COPY grafana.ini /etc/grafana/
COPY dashboard.sql /

USER root
RUN apk add sqlite

RUN sqlite3 /var/lib/grafana/grafana.db < /dashboard.sql
RUN rm /dashboard.sql

USER grafana