diff --git a/README.md b/README.md
index 7d21324..d7d9f4f 100644
--- a/README.md
+++ b/README.md
@@ -256,6 +256,20 @@
 docker pull prom/node-exporter
 docker run -d -p 9100:9100 --name node-exporter prom/node-exporter
 ```
+### Blob scraper
+
+The [eigenda-blob-scraper](https://github.com/NethermindEth/eigenda-blob-scrapper) is an application developed by Nethermind that fetches blob data as JSON from an EigenDA public endpoint and transforms it into Prometheus metrics every `FETCH_INTERVAL` seconds. The metrics are exposed on port 9600 for Prometheus to scrape.
+
+The tool is distributed as a Docker image on Docker Hub: https://hub.docker.com/repository/docker/nethermind/eigenda-blob-scraper.
+
+The scraper is included in the docker-compose file and must be added as a Prometheus scrape target, as shown in `monitoring/prometheus.yml`.
+
+You can use the following PromQL query to check whether any blobs have been requested in the last 10 minutes:
+
+```
+(requested_at{} - (time() - (10 * 60))) > 0 and (requested_at{} - time()) <= 0
+```
+
 ## Troubleshooting
 * If you see the following error:
 ```
diff --git a/docker-compose.yml b/docker-compose.yml
index cfc88b0..5c831ce 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -36,6 +36,16 @@ services:
       - "${NODE_LOG_PATH_HOST}:/app/logs:rw"
       - "${NODE_DB_PATH_HOST}:/data/operator/db:rw"
     restart: unless-stopped
+  scraper:
+    image: nethermind/eigenda-blob-scraper:latest
+    expose:
+      - "9600"
+    networks:
+      - eigenda
+    environment:
+      - FETCH_INTERVAL=60
+      - API_URL=https://blobs-goerli.eigenda.xyz/api/trpc/blobs.getBlobs
+    restart: unless-stopped
 networks:
   eigenda:
     name: ${NETWORK_NAME}
diff --git a/monitoring/prometheus.yml b/monitoring/prometheus.yml
index 17ce65d..9cf36bf 100644
--- a/monitoring/prometheus.yml
+++ b/monitoring/prometheus.yml
@@ -9,4 +9,9 @@ scrape_configs:
 
   - job_name: 'node'
     static_configs:
-      - targets: ['node-exporter:9100']
\ No newline at end of file
+      - targets: ['node-exporter:9100']
+
+  - job_name: 'scraper'
+    scrape_interval: 1m
+    static_configs:
+      - targets: ['scraper:9600']
\ No newline at end of file
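
The PromQL query added to the README keeps only series whose `requested_at` timestamp falls within the last 10 minutes and is not in the future. As a sanity check of that logic, here is a minimal Python sketch of the same window test (the function name `is_recent` and the sample timestamps are illustrative, not part of the scraper):

```python
import time

def is_recent(requested_at: float, now: float, window_seconds: int = 600) -> bool:
    """Mirror the PromQL filter:
    (requested_at - (now - window)) > 0  -> within the lookback window
    (requested_at - now) <= 0           -> not in the future
    """
    return (requested_at - (now - window_seconds)) > 0 and (requested_at - now) <= 0

now = time.time()
print(is_recent(now - 300, now))  # requested 5 minutes ago  -> True
print(is_recent(now - 900, now))  # requested 15 minutes ago -> False
print(is_recent(now + 60, now))   # timestamp in the future  -> False
```

Series that fail either condition are dropped by the `and` in PromQL, so an empty query result means no blobs were requested in the window.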