diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index f7d7da66..6af1a564 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -19,7 +19,7 @@ A clear and concise description of what you expected to happen.
**Please complete the following information about the solution:**
- [ ] Version: [e.g. v3.1]
-To get the version of the solution, you can look at the description of the created CloudFormation stack. For example, "AWS WAF Security Automations v3.1: This AWS CloudFormation template helps you provision the AWS WAF Security Automations stack without worrying about creating and configuring the underlying AWS infrastructure". If the description does not contain the version information, you can look at the mappings section of the template:
+To get the version of the solution, you can look at the description of the created CloudFormation stack. For example, "Security Automations for AWS WAF v3.1: This AWS CloudFormation template helps you provision the Security Automations for AWS WAF stack without worrying about creating and configuring the underlying AWS infrastructure". If the description does not contain the version information, you can look at the mappings section of the template:
```yaml
Mappings:
@@ -33,7 +33,7 @@ Mappings:
- [ ] Region: [e.g. us-east-1]
- [ ] Was the solution modified from the version published on this repository?
- [ ] If the answer to the previous question was yes, are the changes available on GitHub?
-- [ ] Have you checked your [service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) for the sevices this solution uses?
+- [ ] Have you checked your [service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) for the services this solution uses?
- [ ] Were there any errors in the CloudWatch Logs?
**Screenshots**
diff --git a/.gitignore b/.gitignore
index 8a22cf58..5fa75d34 100644
--- a/.gitignore
+++ b/.gitignore
@@ -10,8 +10,47 @@ source/tests/__pycache__/
source/log_parser/__pycache__/
deployment/global-s3-assets/
deployment/regional-s3-assets/
+source/**/idna**
+source/**/certifi**
+source/**/urllib**
+source/**/requests**
+source/**/backoff**
+source/**/charset**
+source/**/bin
+source/**/__pycache__
+source/**/.venv**
+source/**/test/__pycache__
+source/**/test/.pytest**
-# coverage
+
+# Unit test / coverage reports
**/coverage
**/package
-*coverage*
\ No newline at end of file
+*coverage
+source/test/coverage-reports/
+**/.venv-test
+
+# linting, scanning configurations, sonarqube
+.scannerwork/
+
+# Third-party dependencies
+backoff*
+bin
+boto3*
+botocore*
+certifi*
+charset*
+dateutil*
+idna*
+jmespath*
+python_*
+requests*
+s3transfer*
+six*
+urllib*
+
+# Ignore the lib folder within each lambda folder. Only include the lib folder at the upper level
+/source/**/lib
\ No newline at end of file
diff --git a/CHANGELOG.md b/CHANGELOG.md
index c43ec5a7..044d1deb 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,55 +1,113 @@
# Changelog
+
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [4.0.0] - 2023-05-11
+
+### Added
+
+- Added support for 10 new AWS Managed Rules rule groups (AMR)
+- Added support for country and URI configurations in HTTP Flood Athena log parser
+- Added support for user-defined S3 prefix for application access log bucket
+- Added support for CloudWatch log retention period configuration
+- Added support for multiple solution deployments in the same account and region
+- Added support for exporting CloudFormation stack output values
+- Replaced the hard-coded amazonaws.com with {AWS::URLSuffix} in the BadBotHoneypot API endpoint
+
+### Fixed
+
+- Avoid changing the account-wide API Gateway logging setting when the solution stack is deleted [GitHub issue 213](https://github.com/aws-solutions/aws-waf-security-automations/issues/213)
+- Avoid creating a new logging bucket for an existing app access log bucket that already has logging enabled
+
## [3.2.5] - 2023-04-18
+
### Patched
+
- Patch s3 logging bucket settings
- Updated the timeout for requests
+
## [3.2.4] - 2023-02-06
+
### Changed
+
- Upgraded pytest to mitigate CVE-2022-42969
- Upgraded requests and subsequently certifi to mitigate CVE-2022-23491
+
## [3.2.3] - 2022-12-13
+
### Changed
+
- Add region as prefix to application attribute group name to avoid conflict with name starting with AWS.
+
## [3.2.2] - 2022-12-05
+
### Added
+
- Added AppRegistry integration
+
## [3.2.1] - 2022-08-30
+
### Added
+
- Added support for configuring oversize handling for requests components
- Added support for configuring sensitivity level for SQL injection rule
+
## [3.2] - 2021-09-22
+
### Added
+
- Added IP retention support on Allowed and Denied IP Sets
+
### Changed
+
- Bug fixes
+
## [3.1] - 2020-10-22
+
### Changed
+
- Replaced s3 path-style with virtual-hosted style
- Added partition variable to all ARNs
- Updated bug report
+
## [3.0] - 2020-07-08
+
### Added
+
- Added an option to deploy AWS Managed Rules for WebACL on installation
+
### Changed
+
- Upgraded from WAF classic to WAFV2 API
- Eliminated dependency on NodeJS and use Python as the standardized programming language
+
## [2.3.3] - 2020-06-15
+
### Added
+
- Implemented Athena optimization: added partitioning for CloudFront, ALB and WAF logs and Athena queries
+
### Changed
+
- Fixed potential DoS vector within Bad Bots X-Forward-For header
+
## [2.3.2] - 2020-02-05
+
### Added
+
### Changed
+
- Fixed README file to accurately reflect script params
- Upgraded from Python 3.7 to 3.8
- Changed RequestThreshold min limit from 2000 to 100
+
## [2.3.1] - 2019-10-30
+
### Added
+
### Changed
+
- Fixed error handling of intermittent issue: (WAFStaleDataException) when calling the UpdateWebACL
- Upgrade from Node 8 to Node 10 for Lambda function
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 49734c60..18e3d08f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -11,7 +11,7 @@ information to effectively respond to your bug report or contribution.
We welcome you to use the GitHub issue tracker to report bugs or suggest features.
-When filing an issue, please check [existing open](https://github.com/awslabs/aws-waf-security-automations/issues), or [recently closed](https://github.com/awslabs/aws-waf-security-automations/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already
+When filing an issue, please check [existing open](https://github.com/aws-solutions/aws-waf-security-automations/issues), or [recently closed](https://github.com/aws-solutions/aws-waf-security-automations/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already
reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
* A reproducible test case or series of steps
@@ -41,8 +41,7 @@ GitHub provides additional document on [forking a repository](https://help.githu
## Finding contributions to work on
-Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels ((enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/awslabs/aws-waf-security-automations/labels/help%20wanted) issues is a great place to start.
-
+Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws-solutions/aws-waf-security-automations/labels/help%20wanted) issues is a great place to start.
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
@@ -56,6 +55,6 @@ If you discover a potential security issue in this project we ask that you notif
## Licensing
-See the [LICENSE](https://github.com/awslabs/aws-waf-security-automations/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
+See the [LICENSE](https://github.com/aws-solutions/aws-waf-security-automations/blob/master/LICENSE.txt) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
diff --git a/NOTICE.txt b/NOTICE.txt
index a8ef6cf8..2c9bd780 100644
--- a/NOTICE.txt
+++ b/NOTICE.txt
@@ -12,9 +12,52 @@ THIRD PARTY COMPONENTS
**********************
This software includes third party software subject to the following copyrights:
-async under the Massachusetts Institute of Technology (MIT) license
-sax under the Internet Systems Consortium (ISC) license
-xml2js under the Massachusetts Institute of Technology (MIT) license
-xmlbuilder under the Massachusetts Institute of Technology (MIT) license
-requests under the Apache Software License
freezegun under the Apache Software License
+boto3 under the Apache Software License
+botocore under the Apache Software License
+Mock under the BSD License
+moto under the Apache Software License
+pytest under the MIT License
+pytest-mock under the MIT License
+pytest-cov under the MIT License
+pytest-env under the MIT License
+pyparsing under the MIT License
+pytest-runner under the MIT License
+uuid under the MIT License
+backoff under the MIT License
+requests under the Apache Software License
+certifi under the Mozilla Public License
+charset_normalizer under the Apache Software License
+python-dateutil under the Apache Software License and BSD License
+idna under the BSD License
+urllib3 under the MIT License
+jmespath under the MIT License
+s3transfer under the Apache Software License
+cryptography under the Apache Software License and BSD License
+Werkzeug under the BSD-3-Clause License
+xmltodict under the MIT License
+responses under the Apache Software License
+Jinja2 under the BSD License
+pycparser under the BSD License
+pyyaml under the MIT License
+attrs under the MIT License
+pluggy under the MIT License
+iniconfig under the MIT License
+exceptiongroup under the MIT License
+packaging under the Apache Software License and BSD License
+tomli under the MIT License
+coverage under the Apache Software License
+cffi under the MIT License
+six under the MIT License
+types-PyYAML under the Apache Software License
+MarkupSafe under the BSD-3-Clause License
diff --git a/README.md b/README.md
index b5cb93d9..c3a3d848 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-**[🚀 Solution Landing Page](https://aws.amazon.com/solutions/implementations/aws-waf-security-automations)** | **[🚧 Feature request](https://github.com/awslabs/aws-waf-security-automations/issues/new?assignees=&labels=feature-request%2C+enhancement&template=feature_request.md&title=)** | **[🐛 Bug Report](https://github.com/awslabs/aws-waf-security-automations/issues/new?assignees=&labels=bug%2C+triage&template=bug_report.md&title=)**
+**[🚀 Solution Landing Page](https://aws.amazon.com/solutions/implementations/security-automations-for-aws-waf)** | **[🚧 Feature request](https://github.com/aws-solutions/aws-waf-security-automations/issues/new?assignees=&labels=feature-request%2C+enhancement&template=feature_request.md&title=)** | **[🐛 Bug Report](https://github.com/aws-solutions/aws-waf-security-automations/issues/new?assignees=&labels=bug%2C+triage&template=bug_report.md&title=)**
Note: If you want to use the solution without building from source, navigate to Solution Landing Page
@@ -15,6 +15,7 @@ Note: If you want to use the solution without building from source, navigate to
- [License](#license)
+
# Solution Overview
The Security Automations for AWS WAF solution is a reference implementation that automatically deploys a set of AWS WAF (web application firewall) rules that filter common web-based attacks. Users can select from preconfigured protective features that define the rules included in an AWS WAF web access control list (web ACL). Once deployed, AWS WAF protects your Amazon CloudFront distributions or Application Load Balancers by inspecting web requests.
@@ -23,9 +24,10 @@ You can use AWS WAF to create custom, application-specific rules that block atta
This solution can be easily installed in your AWS accounts via launching the provided AWS CloudFormation template.
-For a detailed solution implementation guide, refer to Solution Landing Page [Security Automations for AWS WAF](https://aws.amazon.com/solutions/implementations/aws-waf-security-automations)
+For a detailed solution implementation guide, refer to Solution Landing Page [Security Automations for AWS WAF](https://aws.amazon.com/solutions/implementations/security-automations-for-aws-waf)
+
# Architecture Diagram
@@ -50,15 +52,18 @@ IP Reputation Lists (H): This component is the IP Lists Parser AWS Lambda functi
Bad Bots (I): This component automatically sets up a honeypot, which is a security mechanism intended to lure and deflect an attempted attack.
+
# Customizing the Solution
+
## Prerequisites for Customization
-* [AWS Command Line Interface](https://aws.amazon.com/cli/)
-* Python 3.8
+- [AWS Command Line Interface](https://aws.amazon.com/cli/)
+- Python 3.10
+
## Build
Building from GitHub source will allow you to modify the solution, such as adding custom actions or upgrading to a new release. The process consists of downloading the source from GitHub, creating Amazon S3 buckets to store artifacts for deployment, building the solution, and uploading the artifacts to S3 in your account.
@@ -70,15 +75,17 @@ Clone or download the repository to a local directory on your linux client. Note
**Git Clone example:**
```
-git clone https://github.com/awslabs/aws-waf-security-automations.git
+git clone https://github.com/aws-solutions/aws-waf-security-automations.git
```
**Download Zip example:**
+
```
-wget https://github.com/awslabs/aws-waf-security-automations/archive/master.zip
+wget https://github.com/aws-solutions/aws-waf-security-automations/archive/master.zip
```
#### 2. Unit test
+
Next, run unit tests to make sure your customized code passes the tests
```
@@ -91,11 +98,12 @@ chmod +x ./run-unit-tests.sh
AWS Solutions use two buckets:
-* One global bucket that is access via the http end point. AWS CloudFormation templates are stored here. Ex. "mybucket"
-* One regional bucket for each region where you plan to deploy the solution. Use the name of the global bucket as the prefix of the bucket name, and suffixed with the region name. Regional assets such as Lambda code are stored here. Ex. "mybucket-us-east-1"
-* The assets in buckets must be accessible by your account
+- One global bucket that is accessed via the HTTP endpoint. AWS CloudFormation templates are stored here. Ex. "mybucket"
+- One regional bucket for each region where you plan to deploy the solution. Use the name of the global bucket as the prefix of the bucket name, suffixed with the region name. Regional assets such as Lambda code are stored here. Ex. "mybucket-us-east-1" (see the example below)
+- The assets in buckets must be accessible by your account
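+
+For illustration, a minimal sketch of creating both buckets with the AWS CLI (the bucket name "mybucket" and the region are hypothetical):
+
+```
+aws s3 mb s3://mybucket
+aws s3 mb s3://mybucket-us-east-1 --region us-east-1
+```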
#### 4. Declare environment variables
+
```
export TEMPLATE_OUTPUT_BUCKET= # Name of the global bucket where CloudFormation templates are stored
export DIST_OUTPUT_BUCKET= # Name for the regional bucket where regional assets are stored
@@ -103,29 +111,36 @@ export SOLUTION_NAME= # name of the solution.
export VERSION= # version number for the customized code
export AWS_REGION= # region where the solution is deployed
```
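+
+For illustration, assuming the hypothetical bucket names from step 3, the variables might look like:
+
+```
+export TEMPLATE_OUTPUT_BUCKET=mybucket
+export DIST_OUTPUT_BUCKET=mybucket   # the upload step appends the region, e.g. mybucket-us-east-1
+export SOLUTION_NAME=aws-waf-security-automations
+export VERSION=v4.0.0
+export AWS_REGION=us-east-1
+```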
+
#### 5. Build the solution
+
```
cd /deployment
chmod +x ./build-s3-dist.sh && ./build-s3-dist.sh $TEMPLATE_OUTPUT_BUCKET $DIST_OUTPUT_BUCKET $SOLUTION_NAME $VERSION
```
+
## Upload deployment assets
+
```
aws s3 cp ./deployment/global-s3-assets s3://$TEMPLATE_OUTPUT_BUCKET/$SOLUTION_NAME/$VERSION --recursive --acl bucket-owner-full-control
aws s3 cp ./deployment/regional-s3-assets s3://$DIST_OUTPUT_BUCKET-$AWS_REGION/$SOLUTION_NAME/$VERSION --recursive --acl bucket-owner-full-control
```
+
+#### _Note:_ You must use the proper ACL and profile for the copy operation, as applicable. Using randomized bucket names is recommended.
+
## Deploy
-* From your designated Amazon S3 bucket where you uploaded the deployment assets, copy the link location for the aws-waf-security-automations.template.
-* Using AWS CloudFormation, launch the Security Automations for AWS WAF solution stack using the copied Amazon S3 link for the aws-waf-security-automations.template.
+- From your designated Amazon S3 bucket where you uploaded the deployment assets, copy the link location for the aws-waf-security-automations.template.
+- Using AWS CloudFormation, launch the Security Automations for AWS WAF solution stack using the copied Amazon S3 link for the aws-waf-security-automations.template (see the example below).
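+
+For example, a minimal sketch of launching the stack from the CLI, reusing the environment variables declared earlier (the stack name and capability flags are illustrative; adjust for your account):
+
+```
+aws cloudformation create-stack \
+  --stack-name my-waf-automations \
+  --template-url https://$TEMPLATE_OUTPUT_BUCKET.s3.amazonaws.com/$SOLUTION_NAME/$VERSION/aws-waf-security-automations.template \
+  --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND
+```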
#### _Note:_ When deploying the template for CloudFront endpoint, you can launch it only from us-east-1 region.
+
# File structure
This project consists of microservices that facilitate the functional areas of the solution. These microservices are deployed to a serverless environment in AWS Lambda.
@@ -145,11 +160,13 @@ This project consists of microservices that facilitate the functional areas of t
+
# Collection of operational metrics
-This solution collects anonymous operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the [implementation guide](https://docs.aws.amazon.com/solutions/latest/aws-waf-security-automations/appendix-g.html).
+This solution collects anonymous operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the [implementation guide](https://docs.aws.amazon.com/solutions/latest/security-automations-for-aws-waf/operational-metrics.html).
+
# License
-See license [here](https://github.com/awslabs/aws-waf-security-automations/blob/master/LICENSE.txt)
\ No newline at end of file
+See license [here](https://github.com/aws-solutions/aws-waf-security-automations/blob/master/LICENSE.txt)
diff --git a/deployment/aws-waf-security-automations-firehose-athena.template b/deployment/aws-waf-security-automations-firehose-athena.template
index 55f2c7cd..01cbf58c 100644
--- a/deployment/aws-waf-security-automations-firehose-athena.template
+++ b/deployment/aws-waf-security-automations-firehose-athena.template
@@ -1,4 +1,4 @@
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
@@ -13,8 +13,8 @@
AWSTemplateFormatVersion: 2010-09-09
Description: >-
- (SO0006-FA) - AWS WAF Security Automations - FA %VERSION%: This AWS CloudFormation template helps
- you provision the AWS WAF Security Automations stack without worrying about creating and
+ (SO0006-FA) - Security Automations for AWS WAF - FA %VERSION%: This AWS CloudFormation template helps
+ you provision the Security Automations for AWS WAF stack without worrying about creating and
configuring the underlying AWS infrastructure.
**WARNING** This template creates an AWS Lambda function, an AWS WAF Web ACL, an Amazon S3 bucket,
@@ -409,7 +409,7 @@ Resources:
Condition: AthenaLogParser
Properties:
Name: !Join ['-', ['WAFAddPartitionAthenaQueryWorkGroup', !Ref UUID]]
- Description: Athena WorkGroup for adding Athena partition queries used by AWS WAF Security Automations Solution
+ Description: Athena WorkGroup for adding Athena partition queries used by Security Automations for AWS WAF Solution
State: ENABLED
RecursiveDeleteOption: true
WorkGroupConfiguration:
@@ -419,8 +419,8 @@ Resources:
Type: AWS::Athena::WorkGroup
Condition: HttpFloodAthenaLogParser
Properties:
- Name: WAFLogAthenaQueryWorkGroup
- Description: Athena WorkGroup for WAF log queries used by AWS WAF Security Automations Solution
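+      # As with the partition workgroup above, suffixing the name with the stack UUID
+      # keeps it unique across multiple solution deployments in the same account and region.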
+ Name: !Join ['-', ['WAFLogAthenaQueryWorkGroup', !Ref UUID]]
+ Description: Athena WorkGroup for WAF log queries used by Security Automations for AWS WAF Solution
State: ENABLED
RecursiveDeleteOption: true
WorkGroupConfiguration:
@@ -430,8 +430,8 @@ Resources:
Type: AWS::Athena::WorkGroup
Condition: ScannersProbesAthenaLogParser
Properties:
- Name: WAFAppAccessLogAthenaQueryWorkGroup
- Description: Athena WorkGroup for CloudFront or ALB application access log queries used by AWS WAF Security Automations Solution
+ Name: !Join ['-', ['WAFAppAccessLogAthenaQueryWorkGroup', !Ref UUID]]
+ Description: Athena WorkGroup for CloudFront or ALB application access log queries used by Security Automations for AWS WAF Solution
State: ENABLED
RecursiveDeleteOption: true
WorkGroupConfiguration:
@@ -455,17 +455,17 @@ Outputs:
Value: !If [AlbEndpoint, !Ref ALBGlueAppAccessLogsTable, !Ref CloudFrontGlueAppAccessLogsTable]
WAFAddPartitionAthenaQueryWorkGroup:
- Description: Athena WorkGroup for adding Athena partition queries used by AWS WAF Security Automations Solution
+ Description: Athena WorkGroup for adding Athena partition queries used by Security Automations for AWS WAF Solution
Value: !Ref WAFAddPartitionAthenaQueryWorkGroup
Condition: AthenaLogParser
WAFLogAthenaQueryWorkGroup:
- Description: Athena WorkGroup for WAF log queries used by AWS WAF Security Automations Solution
+ Description: Athena WorkGroup for WAF log queries used by Security Automations for AWS WAF Solution
Value: !Ref WAFLogAthenaQueryWorkGroup
Condition: HttpFloodAthenaLogParser
WAFAppAccessLogAthenaQueryWorkGroup:
- Description: Athena WorkGroup for CloudFront or ALB application access log queries used by AWS WAF Security Automations Solution
+ Description: Athena WorkGroup for CloudFront or ALB application access log queries used by Security Automations for AWS WAF Solution
Value: !Ref WAFAppAccessLogAthenaQueryWorkGroup
Condition: ScannersProbesAthenaLogParser
diff --git a/deployment/aws-waf-security-automations-webacl.template b/deployment/aws-waf-security-automations-webacl.template
index d62b0a7f..e157a92d 100644
--- a/deployment/aws-waf-security-automations-webacl.template
+++ b/deployment/aws-waf-security-automations-webacl.template
@@ -3,8 +3,8 @@
AWSTemplateFormatVersion: 2010-09-09
Description: >-
- (SO0006-WebACL) - AWS WAF Security Automations %VERSION%: This AWS CloudFormation template helps
- you provision the AWS WAF Security Automations stack without worrying about creating and
+ (SO0006-WebACL) - Security Automations for AWS WAF %VERSION%: This AWS CloudFormation template helps
+ you provision the Security Automations for AWS WAF stack without worrying about creating and
configuring the underlying AWS infrastructure.
**WARNING** This template creates an AWS WAF Web ACL and Amazon CloudWatch custom metrics.
@@ -13,6 +13,26 @@ Description: >-
Parameters:
ActivateAWSManagedRulesParam:
Type: String
+ ActivateAWSManagedAPParam:
+ Type: String
+ ActivateAWSManagedKBIParam:
+ Type: String
+ ActivateAWSManagedIPRParam:
+ Type: String
+ ActivateAWSManagedAIPParam:
+ Type: String
+ ActivateAWSManagedSQLParam:
+ Type: String
+ ActivateAWSManagedLinuxParam:
+ Type: String
+ ActivateAWSManagedPOSIXParam:
+ Type: String
+ ActivateAWSManagedWindowsParam:
+ Type: String
+ ActivateAWSManagedPHPParam:
+ Type: String
+ ActivateAWSManagedWPParam:
+ Type: String
ActivateSqlInjectionProtectionParam:
Type: String
ActivateCrossSiteScriptingProtectionParam:
@@ -43,10 +63,50 @@ Parameters:
Type: String
Conditions:
- AWSManagedRulesActivated: !Equals
+ AWSManagedCRSActivated: !Equals
- !Ref ActivateAWSManagedRulesParam
- 'yes'
-
+
+ AWSManagedAPActivated: !Equals
+ - !Ref ActivateAWSManagedAPParam
+ - 'yes'
+
+ AWSManagedKBIActivated: !Equals
+ - !Ref ActivateAWSManagedKBIParam
+ - 'yes'
+
+ AWSManagedIPRActivated: !Equals
+ - !Ref ActivateAWSManagedIPRParam
+ - 'yes'
+
+ AWSManagedAIPActivated: !Equals
+ - !Ref ActivateAWSManagedAIPParam
+ - 'yes'
+
+ AWSManagedSQLActivated: !Equals
+ - !Ref ActivateAWSManagedSQLParam
+ - 'yes'
+
+ AWSManagedLinuxActivated: !Equals
+ - !Ref ActivateAWSManagedLinuxParam
+ - 'yes'
+
+ AWSManagedPOSIXActivated: !Equals
+ - !Ref ActivateAWSManagedPOSIXParam
+ - 'yes'
+
+ AWSManagedWindowsActivated: !Equals
+ - !Ref ActivateAWSManagedWindowsParam
+ - 'yes'
+
+ AWSManagedPHPActivated: !Equals
+ - !Ref ActivateAWSManagedPHPParam
+ - 'yes'
+
+ AWSManagedWPActivated: !Equals
+ - !Ref ActivateAWSManagedWPParam
+ - 'yes'
+
SqlInjectionProtectionActivated: !Not [!Equals [!Ref ActivateSqlInjectionProtectionParam, 'no']]
CrossSiteScriptingProtectionActivated: !Not [!Equals [!Ref ActivateCrossSiteScriptingProtectionParam, 'no']]
@@ -361,7 +421,7 @@ Resources:
Code:
S3Bucket: !Join ['-', [!FindInMap ["SourceCode", "General", "SourceBucket"], !Ref 'AWS::Region']]
S3Key: !Join ['/', [!FindInMap ["SourceCode", "General", "KeyPrefix"], 'timer.zip']]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 128
Timeout: 300
Environment:
@@ -392,9 +452,9 @@ Resources:
Allow: {}
Rules:
- !If
- - AWSManagedRulesActivated
+ - AWSManagedCRSActivated
- Name: AWS-AWSManagedRulesCommonRuleSet
- Priority: 0
+ Priority: 6
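+            # Rule priorities are now assigned globally across the web ACL, and lower numbers
+            # evaluate first: allow/deny lists (0-1), IP reputation and bot rules (2-5), the
+            # remaining managed rule groups (6-14), then the legacy custom and log-parser rules (15-19).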
OverrideAction:
None: {}
VisibilityConfig:
@@ -406,8 +466,158 @@ Resources:
VendorName: AWS
Name: AWSManagedRulesCommonRuleSet
- !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedAPActivated
+ - Name: AWS-AWSManagedRulesAdminProtectionRuleSet
+ Priority: 7
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
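+              # Metric name: the parent stack name with hyphens removed plus a per-rule-group
+              # suffix (here 'AMRAP'); the same pattern repeats for each managed rule group below.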
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRAP']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesAdminProtectionRuleSet
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedKBIActivated
+ - Name: AWS-AWSManagedRulesKnownBadInputsRuleSet
+ Priority: 8
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRKBI']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesKnownBadInputsRuleSet
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedIPRActivated
+ - Name: AWS-AWSManagedRulesAmazonIpReputationList
+ Priority: 2
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRIPR']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesAmazonIpReputationList
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedAIPActivated
+ - Name: AWS-AWSManagedRulesAnonymousIpList
+ Priority: 4
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRAIP']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesAnonymousIpList
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedSQLActivated
+ - Name: AWS-AWSManagedRulesSQLiRuleSet
+ Priority: 14
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRSQL']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesSQLiRuleSet
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedLinuxActivated
+ - Name: AWS-AWSManagedRulesLinuxRuleSet
+ Priority: 11
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRLinux']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesLinuxRuleSet
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedPOSIXActivated
+ - Name: AWS-AWSManagedRulesUnixRuleSet
+ Priority: 10
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRPOSIX']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesUnixRuleSet
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedWindowsActivated
+ - Name: AWS-AWSManagedRulesWindowsRuleSet
+ Priority: 9
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRWindows']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesWindowsRuleSet
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedPHPActivated
+ - Name: AWS-AWSManagedRulesPHPRuleSet
+ Priority: 12
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRPHP']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesPHPRuleSet
+ - !Ref 'AWS::NoValue'
+ - !If
+ - AWSManagedWPActivated
+ - Name: AWS-AWSManagedRulesWordPressRuleSet
+ Priority: 13
+ OverrideAction:
+ None: {}
+ VisibilityConfig:
+ SampledRequestsEnabled: true
+ CloudWatchMetricsEnabled: true
+ MetricName: !Join ['', [!Join ['', !Split ['-', !Ref ParentStackName]], 'AMRWP']]
+ Statement:
+ ManagedRuleGroupStatement:
+ VendorName: AWS
+ Name: AWSManagedRulesWordPressRuleSet
+ - !Ref 'AWS::NoValue'
- Name: !Sub '${ParentStackName}WhitelistRule'
- Priority: 1
+ Priority: 0
Action:
Allow: {}
VisibilityConfig:
@@ -422,7 +632,7 @@ Resources:
- IPSetReferenceStatement:
Arn: !GetAtt WAFWhitelistSetV6.Arn
- Name: !Sub '${ParentStackName}BlacklistRule'
- Priority: 2
+ Priority: 1
Action:
Block: {}
VisibilityConfig:
@@ -439,7 +649,7 @@ Resources:
- !If
- HttpFloodProtectionLogParserActivated
- Name: !Sub '${ParentStackName}HttpFloodRegularRule'
- Priority: 3
+ Priority: 18
Action:
Block: {}
VisibilityConfig:
@@ -457,7 +667,7 @@ Resources:
- !If
- HttpFloodProtectionRateBasedRuleActivated
- Name: !Sub '${ParentStackName}HttpFloodRateBasedRule'
- Priority: 4
+ Priority: 19
Action:
Block: {}
VisibilityConfig:
@@ -472,7 +682,7 @@ Resources:
- !If
- ScannersProbesProtectionActivated
- Name: !Sub '${ParentStackName}ScannersAndProbesRule'
- Priority: 5
+ Priority: 17
Action:
Block: {}
VisibilityConfig:
@@ -490,7 +700,7 @@ Resources:
- !If
- ReputationListsProtectionActivated
- Name: !Sub '${ParentStackName}IPReputationListsRule'
- Priority: 6
+ Priority: 3
Action:
Block: {}
VisibilityConfig:
@@ -508,7 +718,7 @@ Resources:
- !If
- BadBotProtectionActivated
- Name: !Sub '${ParentStackName}BadBotRule'
- Priority: 7
+ Priority: 5
Action:
Block: {}
VisibilityConfig:
@@ -526,7 +736,7 @@ Resources:
- !If
- SqlInjectionProtectionActivated
- Name: !Sub '${ParentStackName}SqlInjectionRule'
- Priority: 20
+ Priority: 15
Action:
Block: {}
VisibilityConfig:
@@ -592,7 +802,7 @@ Resources:
- !If
- CrossSiteScriptingProtectionActivated
- Name: !Sub '${ParentStackName}XssRule'
- Priority: 30
+ Priority: 16
Action:
Block: {}
VisibilityConfig:
@@ -801,4 +1011,7 @@ Outputs:
WAFBadBotSetV6Id:
Value: !GetAtt WAFBadBotSetV6.Id
- Condition: BadBotProtectionActivated
\ No newline at end of file
+ Condition: BadBotProtectionActivated
+
+ CustomTimerFunctionName:
+ Value: !Ref CustomTimer
\ No newline at end of file
diff --git a/deployment/aws-waf-security-automations.template b/deployment/aws-waf-security-automations.template
index 8c650057..1e5a18fb 100644
--- a/deployment/aws-waf-security-automations.template
+++ b/deployment/aws-waf-security-automations.template
@@ -3,8 +3,8 @@
AWSTemplateFormatVersion: 2010-09-09
Description: >-
- (SO0006) - AWS WAF Security Automations %VERSION%: This AWS CloudFormation template helps
- you provision the AWS WAF Security Automations stack without worrying about creating and
+ (SO0006) - Security Automations for AWS WAF %VERSION%: This AWS CloudFormation template helps
+ you provision the Security Automations for AWS WAF stack without worrying about creating and
configuring the underlying AWS infrastructure.
**WARNING** This template creates multiple AWS Lambda functions, an AWS WAFv2 Web ACL, an Amazon S3 bucket,
@@ -15,37 +15,106 @@ Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
- default: Protection List
+ default: Resource Type
+ Parameters:
+ - EndpointType
+ - Label:
+ default: AWS Managed IP Reputation Rule Groups
+ Parameters:
+ - ActivateAWSManagedIPRParam
+ - ActivateAWSManagedAIPParam
+ - Label:
+ default: AWS Managed Baseline Rule Groups
Parameters:
- ActivateAWSManagedRulesParam
- - ActivateSqlInjectionProtectionParam
- - SqlInjectionProtectionSensitivityLevelParam
- - ActivateCrossSiteScriptingProtectionParam
- - ActivateHttpFloodProtectionParam
- - ActivateScannersProbesProtectionParam
- - ActivateReputationListsProtectionParam
- - ActivateBadBotProtectionParam
-
+ - ActivateAWSManagedAPParam
+ - ActivateAWSManagedKBIParam
- Label:
- default: Log Monitoring Settings
+ default: AWS Managed Use-case Specific Rule Groups
Parameters:
- - EndpointType
+ - ActivateAWSManagedSQLParam
+ - ActivateAWSManagedLinuxParam
+ - ActivateAWSManagedPOSIXParam
+ - ActivateAWSManagedWindowsParam
+ - ActivateAWSManagedPHPParam
+ - ActivateAWSManagedWPParam
+ - Label:
+ default: Custom Rule - Scanner & Probes
+ Parameters:
+ - ActivateScannersProbesProtectionParam
- AppAccessLogBucket
+ - AppAccessLogBucketPrefixParam
+ - AppAccessLogBucketLoggingStatusParam
- ErrorThreshold
+ - KeepDataInOriginalS3Location
+ - Label:
+ default: Custom Rule - HTTP Flood
+ Parameters:
+ - ActivateHttpFloodProtectionParam
- RequestThreshold
+ - RequestThresholdByCountryParam
+ - HTTPFloodAthenaQueryGroupByParam
- WAFBlockPeriod
- - KeepDataInOriginalS3Location
-
+ - AthenaQueryRunTimeScheduleParam
+ - Label:
+ default: Custom Rule - Bad Bot
+ Parameters:
+ - ActivateBadBotProtectionParam
+ - ApiGatewayBadBotCWRoleParam
+ - Label:
+ default: Custom Rule - Third Party IP Reputation Lists
+ Parameters:
+ - ActivateReputationListsProtectionParam
+ - Label:
+ default: Legacy Custom Rules
+ Parameters:
+ - ActivateSqlInjectionProtectionParam
+ - SqlInjectionProtectionSensitivityLevelParam
+ - ActivateCrossSiteScriptingProtectionParam
- Label:
- default: IP Retention Settings
+ default: Allowed and Denied IP Retention Settings
Parameters:
- IPRetentionPeriodAllowedParam
- IPRetentionPeriodDeniedParam
- SNSEmailParam
+ - Label:
+ default: Advanced Settings
+ Parameters:
+ - LogGroupRetentionParam
ParameterLabels:
ActivateAWSManagedRulesParam:
- default: Activate AWS Managed Rules Protection
+ default: Activate Core Rule Set Managed Rule Group Protection
+
+ ActivateAWSManagedAPParam:
+ default: Activate Admin Protection Managed Rule Group Protection
+
+ ActivateAWSManagedKBIParam:
+ default: Activate Known Bad Inputs Managed Rule Group Protection
+
+ ActivateAWSManagedIPRParam:
+ default: Activate Amazon IP reputation List Managed Rule Group Protection
+
+ ActivateAWSManagedAIPParam:
+ default: Activate Anonymous IP List Managed Rule Group Protection
+
+ ActivateAWSManagedSQLParam:
+ default: Activate SQL Database Managed Rule Group Protection
+
+ ActivateAWSManagedLinuxParam:
+ default: Activate Linux Operating System Managed Rule Group Protection
+
+ ActivateAWSManagedPOSIXParam:
+ default: Activate POSIX Operating System Managed Rule Group Protection
+
+ ActivateAWSManagedWindowsParam:
+ default: Activate Windows Operating System Managed Rule Group Protection
+
+ ActivateAWSManagedPHPParam:
+ default: Activate PHP Application Managed Rule Group Protection
+
+ ActivateAWSManagedWPParam:
+ default: Activate WordPress Application Managed Rule Group Protection
ActivateSqlInjectionProtectionParam:
default: Activate SQL Injection Protection
@@ -68,21 +137,39 @@ Metadata:
ActivateBadBotProtectionParam:
default: Activate Bad Bot Protection
+ ApiGatewayBadBotCWRoleParam:
+ default: ARN of an IAM role that has write access to CloudWatch logs in your account
+
EndpointType:
- default: Endpoint Type
+ default: Endpoint
AppAccessLogBucket:
default: Application Access Log Bucket Name
+ AppAccessLogBucketPrefixParam:
+ default: Application Access Log Bucket Prefix
+
+ AppAccessLogBucketLoggingStatusParam:
+ default: Is bucket access logging turned on?
+
ErrorThreshold:
default: Error Threshold
RequestThreshold:
- default: Request Threshold
+ default: Default Request Threshold
+
+ RequestThresholdByCountryParam:
+ default: Request Threshold by Country
+
+ HTTPFloodAthenaQueryGroupByParam:
+ default: Group By Requests in HTTP Flood Athena Query
WAFBlockPeriod:
default: WAF Block Period
+ AthenaQueryRunTimeScheduleParam:
+ default: Athena Query Run Time Schedule (Minute)
+
KeepDataInOriginalS3Location:
default: Keep Data in Original S3 Location
@@ -93,7 +180,10 @@ Metadata:
default: Retention Period (Minutes) for Denied IP Set
SNSEmailParam:
- default: Email for receiving notifcation upon Allowed or Denied IP Sets expiration
+ default: Email for receiving notification upon Allowed or Denied IP Sets expiration
+
+ LogGroupRetentionParam:
+ default: Retention Period (Days) for Log Groups
Parameters:
@@ -103,7 +193,134 @@ Parameters:
AllowedValues:
- 'yes'
- 'no'
- Description: Choose yes to enable the AWS Managed Rules.
+ Description: >-
+ Core Rule Set provides protection against exploitation of a wide range of vulnerabilities,
+ including some of the high risk and commonly occurring vulnerabilities. Consider using
+ this rule group for any AWS WAF use case. Required WCU: 700. Your account should have
+ sufficient WCU capacity to avoid WebACL stack deployment failure due to exceeding the
+ capacity limit.
+
+ ActivateAWSManagedAPParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The Admin protection rule group blocks external access to exposed administrative pages.
+ This might be useful if you run third-party software or want to reduce the risk of a
+ malicious actor gaining administrative access to your application. Required WCU: 100.
+
+ ActivateAWSManagedKBIParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The Known bad inputs rule group blocks request patterns that are known to be invalid and
+ are associated with exploitation or discovery of vulnerabilities. This can help reduce
+ the risk of a malicious actor discovering a vulnerable application. Required WCU: 200.
+
+ ActivateAWSManagedIPRParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+      The Amazon IP reputation list rule group is based on Amazon internal threat intelligence.
+ This is useful if you would like to block IP addresses typically associated with bots or
+ other threats. Blocking these IP addresses can help mitigate bots and reduce the risk of
+ a malicious actor discovering a vulnerable application. Required WCU: 25.
+
+ ActivateAWSManagedAIPParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The Anonymous IP list rule group blocks requests from services that permit the obfuscation of
+ viewer identity. These include requests from VPNs, proxies, Tor nodes, and hosting providers.
+ This rule group is useful if you want to filter out viewers that might be trying to hide their
+ identity from your application. Blocking the IP addresses of these services can help mitigate
+ bots and evasion of geographic restrictions. Required WCU: 50.
+
+ ActivateAWSManagedSQLParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The SQL database rule group blocks request patterns associated with exploitation of SQL databases,
+ like SQL injection attacks. This can help prevent remote injection of unauthorized queries. Evaluate
+      this rule group for use if your application interfaces with an SQL database. Using the SQL injection
+      custom rule is optional if you already have the AWS managed SQL database rule group activated. Required WCU: 200.
+
+ ActivateAWSManagedLinuxParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The Linux operating system rule group blocks request patterns associated with the exploitation of
+ vulnerabilities specific to Linux, including Linux-specific Local File Inclusion (LFI) attacks.
+ This can help prevent attacks that expose file contents or run code for which the attacker should
+ not have had access. Evaluate this rule group if any part of your application runs on Linux. You
+ should use this rule group in conjunction with the POSIX operating system rule group. Required WCU: 200.
+
+ ActivateAWSManagedPOSIXParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The POSIX operating system rule group blocks request patterns associated with the exploitation of
+ vulnerabilities specific to POSIX and POSIX-like operating systems, including Local File Inclusion
+ (LFI) attacks. This can help prevent attacks that expose file contents or run code for which the
+ attacker should not have had access. Evaluate this rule group if any part of your application runs
+ on a POSIX or POSIX-like operating system. Required WCU: 100.
+
+ ActivateAWSManagedWindowsParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The Windows operating system rule group blocks request patterns associated with the exploitation of
+ vulnerabilities specific to Windows, like remote execution of PowerShell commands. This can help
+ prevent exploitation of vulnerabilities that permit an attacker to run unauthorized commands or run
+ malicious code. Evaluate this rule group if any part of your application runs on a Windows operating
+ system. Required WCU: 200.
+
+ ActivateAWSManagedPHPParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The PHP application rule group blocks request patterns associated with the exploitation of vulnerabilities
+ specific to the use of the PHP programming language, including injection of unsafe PHP functions. This can
+ help prevent exploitation of vulnerabilities that permit an attacker to remotely run code or commands for
+ which they are not authorized. Evaluate this rule group if PHP is installed on any server with which your
+ application interfaces. Required WCU: 100.
+
+ ActivateAWSManagedWPParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ The WordPress application rule group blocks request patterns associated with the exploitation of vulnerabilities
+ specific to WordPress sites. Evaluate this rule group if you are running WordPress. This rule group should be
+ used in conjunction with the SQL database and PHP application rule groups. Required WCU: 100.
ActivateSqlInjectionProtectionParam:
Type: String
@@ -114,9 +331,10 @@ Parameters:
- 'yes - NO_MATCH'
- 'no'
Description: >-
- Choose yes to deploy the default SQL injection protection rule designed to block common SQL injection attacks.
- It uses CONTINUE option for oversized request handling by default. Note: If you customized the rule outside of CloudFormation,
- your changes will be overwritten after stack update.
+ Choose yes to deploy the default SQL injection protection rule designed to block common SQL injection attacks.
+ Consider activating it if you are not using Core Rule Set or AWS managed SQL database rule group. The 'yes'
+ option uses CONTINUE for oversized request handling by default. Note: If you customized the rule outside of
+ CloudFormation, your changes will be overwritten after stack update.
SqlInjectionProtectionSensitivityLevelParam:
Type: String
@@ -140,8 +358,8 @@ Parameters:
- 'no'
Description: >-
Choose yes to deploy the default cross-site scripting protection rule designed to block common cross-site scripting attacks.
- It uses CONTINUE option for oversized request handling by default. Note: If you customized the rule outside of CloudFormation,
- your changes will be overwritten after stack update.
+ Consider activating it if you are not using Core Rule Set. The 'yes' option uses CONTINUE for oversized request handling by
+ default. Note: If you customized the rule outside of CloudFormation, your changes will be overwritten after stack update.
ActivateHttpFloodProtectionParam:
Type: String
@@ -151,7 +369,7 @@ Parameters:
- 'yes - AWS Lambda log parser'
- 'yes - Amazon Athena log parser'
- 'no'
- Description: Choose yes to enable the component designed to block HTTP flood attacks.
+ Description: Choose yes to activate the component designed to block HTTP flood attacks.
ActivateScannersProbesProtectionParam:
Type: String
@@ -160,7 +378,7 @@ Parameters:
- 'yes - AWS Lambda log parser'
- 'yes - Amazon Athena log parser'
- 'no'
- Description: Choose yes to enable the component designed to block scanners and probes.
+ Description: Choose yes to activate the component designed to block scanners and probes.
ActivateReputationListsProtectionParam:
Type: String
@@ -178,7 +396,17 @@ Parameters:
AllowedValues:
- 'yes'
- 'no'
- Description: Choose yes to enable the component designed to block bad bots and content scrapers.
+ Description: Choose yes to activate the component designed to block bad bots and content scrapers.
+
+ ApiGatewayBadBotCWRoleParam:
+ Type: String
+ Default: ''
+ Description: >-
+ Provide an optional ARN of an IAM role that has write access to CloudWatch logs in your
+ account. Example ARN: arn:aws:iam::account_id:role/myrolename.
+ See https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html
+ for instructions on how to create the role. If you leave it blank (default), a new role
+ will be created for you.
EndpointType:
Type: String
@@ -186,7 +414,7 @@ Parameters:
AllowedValues:
- 'CloudFront'
- 'ALB'
- Description: Select the type of resource being used.
+ Description: Select the resource type and then select the resource below that you want to associate with this web ACL.
AppAccessLogBucket:
Type: String
@@ -194,9 +422,31 @@ Parameters:
AllowedPattern: '(^$|^([a-z]|(\d(?!\d{0,2}\.\d{1,3}\.\d{1,3}\.\d{1,3})))([a-z\d]|(\.(?!(\.|-)))|(-(?!\.))){1,61}[a-z\d]$)'
Description: >-
If you chose yes for the Activate Scanners & Probes Protection parameter, enter a name for the
- Amazon S3 bucket where you want to store access logs for your CloudFront distribution or Application
- Load Balancer. More about bucket name restriction here: http://amzn.to/1p1YlU5.
- If you chose to deactivate this protection, ignore this parameter.
+ Amazon S3 bucket (new or existing) where you want to store access logs for your CloudFront
+ distribution or Application Load Balancer. More about bucket name restriction here:
+ http://amzn.to/1p1YlU5. If you chose to deactivate this protection, ignore this parameter.
+
+ AppAccessLogBucketPrefixParam:
+ Type: String
+ Default: 'AWSLogs/'
+ Description: >-
+      If you chose yes for the Activate Scanners & Probes Protection parameter, you can enter
+      an optional user-defined prefix for the application access logs bucket above. For an ALB resource,
+      you must append AWSLogs/ to your prefix, such as yourprefix/AWSLogs/. For a CloudFront resource,
+      you can enter any prefix, such as yourprefix/. Leave it as AWSLogs/ (default) if there isn't a
+      user-defined prefix. If you chose to deactivate this protection, ignore this parameter.
+
+ AppAccessLogBucketLoggingStatusParam:
+ Type: String
+ Default: 'no'
+ AllowedValues:
+ - 'yes'
+ - 'no'
+ Description: >-
+ Choose yes if you provided an existing application access log bucket above and the server access
+ logging for the bucket is already turned on. If you chose no, the solution will turn on server
+ access logging for your bucket. If you deactivate Scanners & Probes Protection, ignore this
+ parameter.
ErrorThreshold:
Type: Number
@@ -213,10 +463,41 @@ Parameters:
MinValue: 0
Description: >-
If you chose yes for the Activate HTTP Flood Protection parameter, enter the maximum
- acceptable requests per FIVE-minute period per IP address. Please note that AWS WAF rate
- based rule requires values greater than 100 (if you chose Lambda/Athena log parser options,
- you can use any value greater than zero). If you chose to deactivate this protection, ignore
- this parameter.
+ acceptable requests per IP address per FIVE-minute period (default). You can change
+ the time period by entering a different number for Athena Query Run Time Schedule below.
+      The request threshold is divided by this number to get the desired threshold per
+      minute that is used in the Athena query. Note: the AWS WAF rate-based rule requires a value
+      greater than 100 (if you chose Lambda/Athena log parser options, you can use any value
+      greater than zero). If you chose to deactivate this protection, ignore this parameter.
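+    # Worked example (hypothetical values): with RequestThreshold = 1000 and an Athena Query
+    # Run Time Schedule of 5, the query runs every 5 minutes and blocks IP addresses that
+    # exceed 1000 / 5 = 200 requests per minute.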
+
+ RequestThresholdByCountryParam:
+ Type: String
+ Default: ''
+ AllowedPattern: '^$|^\{"\w+":\d+([,]"\w+":\d+)*\}+$'
+ Description: >-
+      If you chose Athena Log Parser to activate HTTP Flood Protection, you can enter a threshold
+      per country using this JSON format: {"TR":50,"ER":150}. These thresholds will be used for
+      requests originating from the specified countries, while the default threshold above
+ will be used for the remaining requests. The threshold is calculated in a default FIVE-minute
+ period. You can change the time period by entering a different number for Athena Query Run Time
+ Schedule below. The request threshold is divided by this number to get the desired threshold
+ per minute that is used in Athena query. Note: If you define a threshold by country, country
+ will automatically be included in Athena query group-by clause, along with ip and other group-by
+ fields you may select below. If you chose to deactivate this protection, ignore this parameter.
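+    # Worked example (hypothetical): with {"TR":50} and the default 5-minute schedule, requests
+    # from TR are evaluated against 50 / 5 = 10 requests per minute, while requests from all
+    # other countries use the default threshold above.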
+
+ HTTPFloodAthenaQueryGroupByParam:
+ Type: String
+ Default: 'None'
+ AllowedValues:
+ - 'Country'
+ - 'URI'
+ - 'Country and URI'
+ - 'None'
+ Description: >-
+ If you chose Athena Log Parser to activate HTTP Flood Protection, you can select a group-by field
+ to count requests per IP along with the selected group-by field. For example, if URI is selected,
+ the requests will be counted per IP and URI. If you chose to deactivate this protection,
+ ignore this parameter.
WAFBlockPeriod:
Type: Number
@@ -227,6 +508,17 @@ Parameters:
parser parameters, enter the period (in minutes) to block applicable IP addresses. If you
chose to deactivate log parsing, ignore this parameter.
+ AthenaQueryRunTimeScheduleParam:
+ Type: Number
+ Default: 5
+ MinValue: 1
+ Description: >-
+      If you chose Athena Log Parser to activate Scanners & Probes Protection or HTTP Flood Protection,
+      you can enter the time interval (in minutes) at which the Athena query runs. By default, the Athena
+      query runs every 5 minutes. The request threshold entered above is divided by this number to get the
+      threshold per minute in the Athena query. If you chose to deactivate these protections, ignore this
+ parameter.
+
KeepDataInOriginalS3Location:
Type: String
Default: 'No'
@@ -267,6 +559,37 @@ Parameters:
If you activated IP retention period above and want to receive an email notification when IP addresses expire, enter a valid email address.
If you did not activate IP retention or want to disable email notification, leave it blank (default).
+ LogGroupRetentionParam:
+ Type: Number
+ Default: 365
+ AllowedValues:
+ - -1
+ - 1
+ - 3
+ - 5
+ - 7
+ - 14
+ - 30
+ - 60
+ - 90
+ - 120
+ - 150
+ - 180
+ - 365
+ - 400
+ - 545
+ - 731
+ - 1827
+ - 2192
+ - 2557
+ - 2922
+ - 3288
+ - 3653
+ Description: >-
+    If you want to activate retention for the CloudWatch Log Groups, enter a number (1 or above) as the retention period (days).
+    You can choose a retention period between one day and 10 years. By default, logs expire after 1 year. Set it to -1 to
+    keep the logs indefinitely.
+
Conditions:
HttpFloodProtectionRateBasedRuleActivated: !Equals
- !Ref ActivateHttpFloodProtectionParam
@@ -316,6 +639,12 @@ Conditions:
- !Ref ActivateBadBotProtectionParam
- 'yes'
+ ApiGatewayBadBotCWRoleNotExists: !Equals [!Ref ApiGatewayBadBotCWRoleParam, '']
+
+ CreateApiGatewayBadBotCloudWatchRole: !And
+ - Condition: BadBotProtectionActivated
+ - Condition: ApiGatewayBadBotCWRoleNotExists
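+  # i.e., a new CloudWatch role for API Gateway is created only when Bad Bot protection
+  # is active and no existing role ARN was supplied in ApiGatewayBadBotCWRoleParam.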
+
AlbEndpoint: !Equals
- !Ref EndpointType
- 'ALB'
@@ -338,6 +667,26 @@ Conditions:
- Condition: IPRetentionPeriod
- Condition: SNSEmailProvided
+ AppAccessLogBucketLoggingOff: !Equals
+ - !Ref AppAccessLogBucketLoggingStatusParam
+ - 'no'
+
+ TurnOnAppAccessLogBucketLogging: !And
+ - Condition: ScannersProbesProtectionActivated
+ - Condition: AppAccessLogBucketLoggingOff
+
+ CreateS3LoggingBucket: !Or
+ - Condition: HttpFloodProtectionLogParserActivated
+ - Condition: TurnOnAppAccessLogBucketLogging
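+  # A dedicated access-logging bucket is created when HTTP Flood log parsing is active,
+  # or when the app access log bucket does not already have server access logging turned on.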
+
+ UserDefinedAppAccessLogBucketPrefix: !Not [!Equals [!Ref AppAccessLogBucketPrefixParam, 'AWSLogs/']]
+
+ RequestThresholdByCountry: !Not [!Equals [!Ref RequestThresholdByCountryParam, '']]
+
+ IsAthenaQueryRunEveryMinute: !Equals [!Ref AthenaQueryRunTimeScheduleParam, 1]
+
+ LogGroupRetentionEnabled: !Not [!Equals [!Ref LogGroupRetentionParam, -1]]
+
Mappings:
SourceCode:
General:
@@ -360,8 +709,6 @@ Mappings:
WAFScannersProbesRule: 'BLOCK'
WAFIPReputationListsRule: 'BLOCK'
WAFBadBotRule: 'BLOCK'
- Athena:
- QueryScheduledRunTime: 5 # by default athena query runs every 5 minutes, update it if needed
UserAgent:
UserAgentExtra: 'AwsSolution/SO0006/%VERSION%'
AppRegistry:
@@ -420,6 +767,16 @@ Resources:
KeyPrefix: !FindInMap ["SourceCode", "General", "KeyPrefix"]
Parameters:
ActivateAWSManagedRulesParam: !Ref ActivateAWSManagedRulesParam
+ ActivateAWSManagedAPParam: !Ref ActivateAWSManagedAPParam
+ ActivateAWSManagedKBIParam: !Ref ActivateAWSManagedKBIParam
+ ActivateAWSManagedIPRParam: !Ref ActivateAWSManagedIPRParam
+ ActivateAWSManagedAIPParam: !Ref ActivateAWSManagedAIPParam
+ ActivateAWSManagedSQLParam: !Ref ActivateAWSManagedSQLParam
+ ActivateAWSManagedLinuxParam: !Ref ActivateAWSManagedLinuxParam
+ ActivateAWSManagedPOSIXParam: !Ref ActivateAWSManagedPOSIXParam
+ ActivateAWSManagedWindowsParam: !Ref ActivateAWSManagedWindowsParam
+ ActivateAWSManagedPHPParam: !Ref ActivateAWSManagedPHPParam
+ ActivateAWSManagedWPParam: !Ref ActivateAWSManagedWPParam
ActivateSqlInjectionProtectionParam: !Ref ActivateSqlInjectionProtectionParam
ActivateCrossSiteScriptingProtectionParam: !Ref ActivateCrossSiteScriptingProtectionParam
SqlInjectionProtectionSensitivityLevelParam: !Ref SqlInjectionProtectionSensitivityLevelParam
@@ -899,6 +1256,24 @@ Resources:
- 'logs:PutLogEvents'
Resource:
- !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/*CustomResource*'
+ - PolicyName: LogsGroupAccess
+ PolicyDocument:
+ Version: 2012-10-17
+ Statement:
+ - Effect: Allow
+ Action:
+ - 'logs:DescribeLogGroups'
+ Resource:
+ - !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:*'
+ - PolicyName: LogsGroupRetentionAccess
+ PolicyDocument:
+ Version: 2012-10-17
+ Statement:
+ - Effect: Allow
+ Action:
+ - 'logs:PutRetentionPolicy'
+ Resource:
+ - !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/*'
- !If
- ScannersProbesProtectionActivated
- PolicyName: S3BucketLoggingAccess
@@ -1081,7 +1456,7 @@ Resources:
- 'logs:PutLogEvents'
Resource:
- !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/*AddAthenaPartitions*'
-
+
Helper:
Type: 'AWS::Lambda::Function'
Properties:
@@ -1097,7 +1472,7 @@ Resources:
LOG_LEVEL: !FindInMap ["Solution", "Data", "LogLevel"]
SCOPE: !If [AlbEndpoint, 'REGIONAL', 'CLOUDFRONT']
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 128
Timeout: 300
Metadata:
@@ -1239,6 +1614,26 @@ Resources:
Properties:
ServiceToken: !GetAtt Helper.Arn
+ SetCloudWatchLogGroupRetention:
+ Type: 'Custom::SetCloudWatchLogGroupRetention'
+ Condition: LogGroupRetentionEnabled
+ DependsOn: CheckRequirements
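+    # The Lambda function names below let the custom resource locate each function's
+    # /aws/lambda/<name> log group and apply the configured retention period.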
+ Properties:
+ ServiceToken: !GetAtt CustomResource.Arn
+ StackName: !Ref 'AWS::StackName'
+ SolutionVersion: "%VERSION%"
+ LogGroupRetention: !Ref LogGroupRetentionParam
+ LogParserLambdaName: !If [LogParser, !Ref LogParser, !Ref 'AWS::NoValue']
+ HelperLambdaName: !Ref Helper
+ MoveS3LogsForPartitionLambdaName: !If [ScannersProbesAthenaLogParser, !Ref MoveS3LogsForPartition, !Ref 'AWS::NoValue']
+ AddAthenaPartitionsLambdaName: !If [AthenaLogParser, !Ref AddAthenaPartitions, !Ref 'AWS::NoValue']
+ SetIPRetentionLambdaName: !If [IPRetentionPeriod, !Ref SetIPRetention, !Ref 'AWS::NoValue']
+ RemoveExpiredIPLambdaName: !If [IPRetentionPeriod, !Ref RemoveExpiredIP, !Ref 'AWS::NoValue']
+ ReputationListsParserLambdaName: !If [ReputationListsProtectionActivated, !Ref ReputationListsParser, !Ref 'AWS::NoValue']
+ BadBotParserLambdaName: !If [BadBotProtectionActivated, !Ref BadBotParser, !Ref 'AWS::NoValue']
+ CustomResourceLambdaName: !Ref CustomResource
+ CustomTimerLambdaName: !GetAtt WebACLStack.Outputs.CustomTimerFunctionName
+
CreateDeliveryStreamName:
Type: 'Custom::CreateDeliveryStreamName'
Condition: HttpFloodProtectionLogParserActivated
@@ -1284,7 +1679,7 @@ Resources:
AccessLoggingBucket:
Type: AWS::S3::Bucket
- Condition: LogParser
+ Condition: CreateS3LoggingBucket
DependsOn: CheckRequirements
DeletionPolicy: Retain
UpdateReplacePolicy: Retain
@@ -1306,7 +1701,7 @@ Resources:
AccessLoggingBucketPolicy:
Type: AWS::S3::BucketPolicy
- Condition: LogParser
+ Condition: CreateS3LoggingBucket
Properties:
Bucket:
Ref: AccessLoggingBucket
@@ -1347,7 +1742,7 @@ Resources:
Description: >-
        This function parses access logs to identify suspicious behavior, such as an abnormal number of errors.
        It then blocks the offending IP addresses for a customer-defined period of time.
- Handler: 'log-parser.lambda_handler'
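+      # log-parser.py was renamed to log_parser.py (a hyphenated module name cannot be imported by the new python unit tests)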
+ Handler: 'log_parser.lambda_handler'
Role: !GetAtt LambdaRoleLogParser.Arn
Code:
S3Bucket: !Join ['-', [!FindInMap ["SourceCode", "General", "SourceBucket"], !Ref 'AWS::Region']]
@@ -1377,10 +1772,13 @@ Resources:
WAF_BLOCK_PERIOD: !Ref WAFBlockPeriod
ERROR_THRESHOLD: !Ref ErrorThreshold
REQUEST_THRESHOLD: !Ref RequestThreshold
+ REQUEST_THRESHOLD_BY_COUNTRY: !Ref RequestThresholdByCountryParam
+ HTTP_FLOOD_ATHENA_GROUP_BY: !Ref HTTPFloodAthenaQueryGroupByParam
+ ATHENA_QUERY_RUN_SCHEDULE: !Ref AthenaQueryRunTimeScheduleParam
SOLUTION_ID: !FindInMap [Solution, Data, SolutionID]
METRICS_URL: !FindInMap [Solution, Data, MetricsURL]
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 512
Timeout: 300
Metadata:
@@ -1412,7 +1810,7 @@ Resources:
KEEP_ORIGINAL_DATA: !Ref KeepDataInOriginalS3Location
ENDPOINT: !Ref EndpointType
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 512
Timeout: 300
Metadata:
@@ -1441,7 +1839,7 @@ Resources:
Variables:
LOG_LEVEL: !FindInMap ["Solution", "Data", "LogLevel"]
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 512
Timeout: 300
Metadata:
@@ -1470,11 +1868,11 @@ Resources:
LOG_LEVEL: !FindInMap ["Solution", "Data", "LogLevel"]
TABLE_NAME: !Ref IPRetentionDDBTable
STACK_NAME: !Ref 'AWS::StackName'
- IP_RETENTION_PEROID_ALLOWED_MINUTE: !Ref IPRetentionPeriodAllowedParam
- IP_RETENTION_PEROID_DENIED_MINUTE: !Ref IPRetentionPeriodDeniedParam
+ IP_RETENTION_PERIOD_ALLOWED_MINUTE: !Ref IPRetentionPeriodAllowedParam
+ IP_RETENTION_PERIOD_DENIED_MINUTE: !Ref IPRetentionPeriodDeniedParam
REMOVE_EXPIRED_IP_LAMBDA_ROLE_NAME: !Ref LambdaRoleRemoveExpiredIP
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 128
Timeout: 300
Metadata:
@@ -1506,7 +1904,7 @@ Resources:
SOLUTION_ID: !FindInMap [Solution, Data, SolutionID]
METRICS_URL: !FindInMap [Solution, Data, MetricsURL]
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 512
Timeout: 300
Metadata:
@@ -1558,7 +1956,12 @@ Resources:
Condition: HttpFloodAthenaLogParser
Properties:
Description: Security Automation - WAF Logs Athena parser
- ScheduleExpression: !Join ['', ['rate(', !FindInMap ["Solution", "Athena", "QueryScheduledRunTime"], ' minutes)']]
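+      # EventBridge rate expressions require the singular "minute" when the value is 1, hence the conditional below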
+ ScheduleExpression:
+ !If [
+ IsAthenaQueryRunEveryMinute,
+ "rate(1 minute)",
+ !Join ["", ["rate(", !Ref AthenaQueryRunTimeScheduleParam, " minutes)"]],
+ ]
Targets:
- Arn: !GetAtt LogParser.Arn
Id: LogParser
@@ -1585,7 +1988,12 @@ Resources:
Condition: ScannersProbesAthenaLogParser
Properties:
Description: Security Automation - App Logs Athena parser
- ScheduleExpression: rate(5 minutes)
+ ScheduleExpression:
+ !If [
+ IsAthenaQueryRunEveryMinute,
+ "rate(1 minute)",
+ !Join ["", ["rate(", !Ref AthenaQueryRunTimeScheduleParam, " minutes)"]],
+ ]
Targets:
- Arn: !GetAtt LogParser.Arn
Id: LogParser
@@ -1631,7 +2039,7 @@ Resources:
Type: AWS::Events::Rule
Condition: IPRetentionPeriod
Properties:
- Description: AWS WAF Security Automations - Events rule for setting IP retention
+ Description: Security Automations for AWS WAF - Events rule for setting IP retention
EventPattern:
source:
- aws.wafv2
@@ -1673,12 +2081,12 @@ Resources:
This lambda function checks third-party IP reputation lists hourly for new IP ranges to
        block. These lists include the Spamhaus Don't Route Or Peer (DROP) and Extended DROP (EDROP)
lists, the Proofpoint Emerging Threats IP list, and the Tor exit node list.
- Handler: 'reputation-lists.lambda_handler'
+ Handler: 'reputation_lists.lambda_handler'
Role: !GetAtt LambdaRoleReputationListsParser.Arn
Code:
S3Bucket: !Join ['-', [!FindInMap ["SourceCode", "General", "SourceBucket"], !Ref 'AWS::Region']]
S3Key: !Join ['/', [!FindInMap ["SourceCode", "General", "KeyPrefix"], 'reputation_lists_parser.zip']]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 512
Timeout: 300
Environment:
@@ -1707,7 +2115,6 @@ Resources:
- id: W58
reason: "Log permissions are defined in the LambdaRoleReputationListsParser policies"
-
ReputationListsParserEventsRule:
Condition: ReputationListsProtectionActivated
Type: 'AWS::Events::Rule'
@@ -1762,7 +2169,7 @@ Resources:
Description: >-
        This lambda function intercepts and inspects trap endpoint requests to extract the source IP
        address, and then adds it to an AWS WAF block list.
- Handler: 'access-handler.lambda_handler'
+ Handler: 'access_handler.lambda_handler'
Role: !GetAtt LambdaRoleBadBot.Arn
Code:
S3Bucket: !Join ['-', [!FindInMap ["SourceCode", "General", "SourceBucket"], !Ref 'AWS::Region']]
@@ -1784,7 +2191,7 @@ Resources:
METRICS_URL: !FindInMap [Solution, Data, MetricsURL]
STACK_NAME: !Ref 'AWS::StackName'
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 128
Timeout: 300
Metadata:
@@ -1919,10 +2326,12 @@ Resources:
-
id: W86
reason: "Leave the configuration of the expiration of the log data in CloudWatch log group to user due to potential compliance regulations."
+ Properties:
+ RetentionInDays: !If [LogGroupRetentionEnabled, !Ref LogGroupRetentionParam, !Ref 'AWS::NoValue']
ApiGatewayBadBotCloudWatchRole:
Type: AWS::IAM::Role
- Condition: BadBotProtectionActivated
+ Condition: CreateApiGatewayBadBotCloudWatchRole
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
@@ -1952,7 +2361,7 @@ Resources:
Type: AWS::ApiGateway::Account
Condition: BadBotProtectionActivated
Properties:
- CloudWatchRoleArn: !GetAtt ApiGatewayBadBotCloudWatchRole.Arn
+ CloudWatchRoleArn: !If [CreateApiGatewayBadBotCloudWatchRole, !GetAtt ApiGatewayBadBotCloudWatchRole.Arn, !Ref ApiGatewayBadBotCWRoleParam]
DependsOn:
- ApiGatewayBadBot
@@ -1960,9 +2369,9 @@ Resources:
Type: 'AWS::Lambda::Function'
Properties:
Description: >-
- This lambda function configures the Web ACL rules based on the features enabled in the
+ This lambda function configures the Web ACL rules based on the features activated in the
CloudFormation template.
- Handler: 'custom-resource.lambda_handler'
+ Handler: 'custom_resource.lambda_handler'
Role: !GetAtt LambdaRoleCustomResource.Arn
Code:
S3Bucket: !Join ['-', [!FindInMap ["SourceCode", "General", "SourceBucket"], !Ref 'AWS::Region']]
@@ -1974,7 +2383,7 @@ Resources:
SOLUTION_ID: !FindInMap [Solution, Data, SolutionID]
METRICS_URL: !FindInMap [Solution, Data, MetricsURL]
USER_AGENT_EXTRA: !FindInMap [Solution, UserAgent, UserAgentExtra]
- Runtime: python3.8
+ Runtime: python3.10
MemorySize: 128
Timeout: 300
Metadata:
@@ -1991,6 +2400,7 @@ Resources:
Type: 'Custom::ConfigureAWSWAFLogs'
Condition: HttpFloodProtectionLogParserActivated
Properties:
+ SolutionVersion: "%VERSION%"
ServiceToken: !GetAtt CustomResource.Arn
WAFWebACLArn: !GetAtt WebACLStack.Outputs.WAFWebACLArn
DeliveryStreamArn: !GetAtt FirehoseAthenaStack.Outputs.FirehoseWAFLogsDeliveryStreamArn
@@ -2001,12 +2411,14 @@ Resources:
Properties:
ServiceToken: !GetAtt CustomResource.Arn
Region: !Ref 'AWS::Region'
+ SolutionVersion: "%VERSION%"
AppAccessLogBucket: !Ref AppAccessLogBucket
+ AppAccessLogBucketPrefix: !Ref AppAccessLogBucketPrefixParam
LogParser: !If [LogParser, !GetAtt LogParser.Arn, !Ref 'AWS::NoValue']
ScannersProbesLambdaLogParser: !If [ScannersProbesLambdaLogParser, 'yes', 'no']
ScannersProbesAthenaLogParser: !If [ScannersProbesAthenaLogParser, 'yes', 'no']
MoveS3LogsForPartition: !If [ScannersProbesAthenaLogParser, !GetAtt MoveS3LogsForPartition.Arn, !Ref 'AWS::NoValue']
- AccessLoggingBucket: !If [ScannersProbesProtectionActivated, !Ref AccessLoggingBucket, !Ref 'AWS::NoValue']
+ AccessLoggingBucket: !If [TurnOnAppAccessLogBucketLogging, !Ref AccessLoggingBucket, !Ref 'AWS::NoValue']
ConfigureWafLogBucket:
Type: 'Custom::ConfigureWafLogBucket'
@@ -2050,11 +2462,27 @@ Resources:
ActivateScannersProbesProtectionParam: !Ref ActivateScannersProbesProtectionParam
ActivateReputationListsProtectionParam: !Ref ActivateReputationListsProtectionParam
ActivateBadBotProtectionParam: !Ref ActivateBadBotProtectionParam
+ ApiGatewayBadBotCWRoleParam: !If [ApiGatewayBadBotCWRoleNotExists, 'no', 'yes']
ActivateAWSManagedRulesParam: !Ref ActivateAWSManagedRulesParam
+ ActivateAWSManagedAPParam: !Ref ActivateAWSManagedAPParam
+ ActivateAWSManagedKBIParam: !Ref ActivateAWSManagedKBIParam
+ ActivateAWSManagedIPRParam: !Ref ActivateAWSManagedIPRParam
+ ActivateAWSManagedAIPParam: !Ref ActivateAWSManagedAIPParam
+ ActivateAWSManagedSQLParam: !Ref ActivateAWSManagedSQLParam
+ ActivateAWSManagedLinuxParam: !Ref ActivateAWSManagedLinuxParam
+ ActivateAWSManagedPOSIXParam: !Ref ActivateAWSManagedPOSIXParam
+ ActivateAWSManagedWindowsParam: !Ref ActivateAWSManagedWindowsParam
+ ActivateAWSManagedPHPParam: !Ref ActivateAWSManagedPHPParam
+ ActivateAWSManagedWPParam: !Ref ActivateAWSManagedWPParam
KeepDataInOriginalS3Location: !Ref KeepDataInOriginalS3Location
IPRetentionPeriodAllowedParam: !Ref IPRetentionPeriodAllowedParam
IPRetentionPeriodDeniedParam: !Ref IPRetentionPeriodDeniedParam
SNSEmailParam: !If [SNSEmail, 'yes', 'no']
+ UserDefinedAppAccessLogBucketPrefixParam: !If [UserDefinedAppAccessLogBucketPrefix, 'yes', 'no']
+ AppAccessLogBucketLoggingStatusParam: !Ref AppAccessLogBucketLoggingStatusParam
+ RequestThresholdByCountryParam: !If [RequestThresholdByCountry, 'yes', 'no']
+ HTTPFloodAthenaQueryGroupByParam: !Ref HTTPFloodAthenaQueryGroupByParam
+ AthenaQueryRunTimeScheduleParam: !Ref AthenaQueryRunTimeScheduleParam
# AWS WAF Web ACL
WAFWebACL: !GetAtt WebACLStack.Outputs.WAFWebACL
# AWS WAF IP Sets - ID
@@ -2172,7 +2600,7 @@ Resources:
Type: AWS::SNS::Topic
Condition: SNSEmail
Properties:
- DisplayName: 'AWS WAF Security Automations IP Expiration Notification'
+ DisplayName: 'Security Automations for AWS WAF IP Expiration Notification'
TopicName: !Join ['-', ['AWS-WAF-Security-Automations-IP-Expiration-Notification', !GetAtt CreateUniqueID.UUID]]
KmsMasterKeyId: alias/aws/sns
@@ -2284,10 +2712,10 @@ Resources:
!Ref FirehoseAthenaStack
ResourceType: CFN_STACK
- DefaultApplicationAttributes:
+ DefaultApplicationAttributeGroup:
Type: AWS::ServiceCatalogAppRegistry::AttributeGroup
Properties:
- Name: !Sub '${AWS::Region}-${AWS::StackName}'
+ Name: !Sub 'AttrGrp-${AWS::Region}-${AWS::StackName}'
Description: Attribute group for solution information.
Attributes:
{ "ApplicationType" : 'AWS-Solutions',
@@ -2300,30 +2728,42 @@ Resources:
Type: AWS::ServiceCatalogAppRegistry::AttributeGroupAssociation
Properties:
Application: !GetAtt Application.Id
- AttributeGroup: !GetAtt DefaultApplicationAttributes.Id
+ AttributeGroup: !GetAtt DefaultApplicationAttributeGroup.Id
Outputs:
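+  # Each output below is now exported so dependent stacks can reference it via Fn::ImportValue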
BadBotHoneypotEndpoint:
Description: Bad Bot Honeypot Endpoint
- Value: !Sub 'https://${ApiGatewayBadBot}.execute-api.${AWS::Region}.amazonaws.com/${ApiGatewayBadBotStage}'
+ Value: !Sub 'https://${ApiGatewayBadBot}.execute-api.${AWS::Region}.${AWS::URLSuffix}/${ApiGatewayBadBotStage}'
Condition: BadBotProtectionActivated
+ Export:
+ Name: !Sub "${AWS::StackName}-BadBotHoneypotEndpoint"
WAFWebACL:
Description: AWS WAF WebACL
Value: !GetAtt WebACLStack.Outputs.WAFWebACL
+ Export:
+ Name: !Sub "${AWS::StackName}-WAFWebACL"
WAFWebACLArn:
Description: AWS WAF WebACL Arn
Value: !GetAtt WebACLStack.Outputs.WAFWebACLArn
+ Export:
+ Name: !Sub "${AWS::StackName}-WAFWebACLArn"
WafLogBucket:
Value: !Ref WafLogBucket
Condition: HttpFloodProtectionLogParserActivated
+ Export:
+ Name: !Sub "${AWS::StackName}-WafLogBucket"
AppAccessLogBucket:
Value: !Ref AppAccessLogBucket
Condition: ScannersProbesProtectionActivated
+ Export:
+ Name: !Sub "${AWS::StackName}-AppAccessLogBucket"
SolutionVersion:
Description: Solution Version Number
- Value: "%VERSION%"
\ No newline at end of file
+ Value: "%VERSION%"
+ Export:
+ Name: !Sub "${AWS::StackName}-SolutionVersion"
\ No newline at end of file
diff --git a/deployment/build-s3-dist.sh b/deployment/build-s3-dist.sh
index 70eb049b..95988a0d 100755
--- a/deployment/build-s3-dist.sh
+++ b/deployment/build-s3-dist.sh
@@ -88,9 +88,9 @@ cd "$source_dir"/log_parser/package || exit 1
zip -q -r9 "$build_dist_dir"/log_parser.zip .
cd "$source_dir"/log_parser || exit 1
mkdir -p lib
-echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py lib"
-cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py lib
-zip -g -r "$build_dist_dir"/log_parser.zip log-parser.py partition_s3_logs.py add_athena_partitions.py build_athena_queries.py lib test
+echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py $source_dir/lib/cw_metrics_util.py $source_dir/lib/logging_util.py $source_dir/lib/s3_util.py lib"
+cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py "$source_dir"/lib/cw_metrics_util.py "$source_dir"/lib/logging_util.py "$source_dir"/lib/s3_util.py lib
+zip -g -r "$build_dist_dir"/log_parser.zip log_parser.py partition_s3_logs.py add_athena_partitions.py build_athena_queries.py lambda_log_parser.py athena_log_parser.py lib test
echo "------------------------------------------------------------------------------"
@@ -102,9 +102,9 @@ cd "$source_dir"/access_handler/package || exit 1
zip -q -r9 "$build_dist_dir"/access_handler.zip .
cd "$source_dir"/access_handler || exit 1
mkdir -p lib
-echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py lib"
-cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py lib
-zip -g -r "$build_dist_dir"/access_handler.zip access-handler.py lib
+echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py $source_dir/lib/cw_metrics_util.py $source_dir/lib/logging_util.py lib"
+cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py "$source_dir"/lib/cw_metrics_util.py "$source_dir"/lib/logging_util.py lib
+zip -g -r "$build_dist_dir"/access_handler.zip access_handler.py lib
echo "------------------------------------------------------------------------------"
@@ -116,9 +116,9 @@ cd "$source_dir"/reputation_lists_parser/package || exit 1
zip -q -r9 "$build_dist_dir"/reputation_lists_parser.zip .
cd "$source_dir"/reputation_lists_parser || exit 1
mkdir -p lib
-echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py lib"
-cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py lib
-zip -g -r "$build_dist_dir"/reputation_lists_parser.zip reputation-lists.py lib
+echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py $source_dir/lib/cfn_response.py $source_dir/lib/cw_metrics_util.py $source_dir/lib/logging_util.py lib"
+cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py "$source_dir"/lib/cfn_response.py "$source_dir"/lib/cw_metrics_util.py "$source_dir"/lib/logging_util.py lib
+zip -g -r "$build_dist_dir"/reputation_lists_parser.zip reputation_lists.py lib
echo "------------------------------------------------------------------------------"
@@ -130,9 +130,9 @@ cd "$source_dir"/custom_resource/package || exit 1
zip -q -r9 "$build_dist_dir"/custom_resource.zip .
cd "$source_dir"/custom_resource || exit 1
mkdir -p lib
-echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py lib"
-cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py lib
-zip -g -r "$build_dist_dir"/custom_resource.zip custom-resource.py lib
+echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/boto3_util.py $source_dir/lib/s3_util.py $source_dir/lib/cfn_response.py $source_dir/lib/logging_util.py lib"
+cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/boto3_util.py "$source_dir"/lib/s3_util.py "$source_dir"/lib/cfn_response.py "$source_dir"/lib/logging_util.py lib
+zip -g -r "$build_dist_dir"/custom_resource.zip custom_resource.py resource_manager.py log_group_retention.py lib
echo "------------------------------------------------------------------------------"
@@ -144,9 +144,9 @@ cd "$source_dir"/helper/package || exit 1
zip -q -r9 "$build_dist_dir"/helper.zip ./*
cd "$source_dir"/helper || exit 1
mkdir -p lib
-echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/boto3_util.py lib"
-cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/boto3_util.py lib
-zip -g -r "$build_dist_dir"/helper.zip helper.py lib
+echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/boto3_util.py $source_dir/lib/s3_util.py $source_dir/lib/cfn_response.py $source_dir/lib/logging_util.py lib"
+cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/boto3_util.py "$source_dir"/lib/s3_util.py "$source_dir"/lib/cfn_response.py "$source_dir"/lib/logging_util.py lib
+zip -g -r "$build_dist_dir"/helper.zip helper.py stack_requirements.py lib
echo "------------------------------------------------------------------------------"
@@ -157,7 +157,12 @@ pip3 install -r requirements.txt --target ./package
cd "$source_dir"/timer/package || exit 1
zip -q -r9 "$build_dist_dir"/timer.zip ./*
cd "$source_dir"/timer || exit 1
-zip -g -r "$build_dist_dir"/timer.zip timer.py
+mkdir -p lib
+echo "cp $source_dir/lib/cfn_response.py lib"
+cp -rf "$source_dir"/lib/cfn_response.py lib
+echo "cp $source_dir/lib/logging_util.py lib"
+cp -rf "$source_dir"/lib/logging_util.py lib
+zip -g -r "$build_dist_dir"/timer.zip timer.py lib
echo "------------------------------------------------------------------------------"
@@ -169,6 +174,6 @@ cd "$source_dir"/ip_retention_handler/package || exit 1
zip -q -r9 "$build_dist_dir"/ip_retention_handler.zip ./*
cd "$source_dir"/ip_retention_handler || exit 1
mkdir -p lib
-echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/sns_util.py $source_dir/lib/dynamodb_util.py $source_dir/lib/boto3_util.py lib"
-cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/sns_util.py "$source_dir"/lib/dynamodb_util.py $source_dir/lib/boto3_util.py lib
+echo "cp $source_dir/lib/waflibv2.py $source_dir/lib/solution_metrics.py $source_dir/lib/sns_util.py $source_dir/lib/dynamodb_util.py $source_dir/lib/boto3_util.py $source_dir/lib/logging_util.py lib"
+cp -rf "$source_dir"/lib/waflibv2.py "$source_dir"/lib/solution_metrics.py "$source_dir"/lib/sns_util.py "$source_dir"/lib/dynamodb_util.py $source_dir/lib/boto3_util.py "$source_dir"/lib/logging_util.py lib
zip -g -r "$build_dist_dir"/ip_retention_handler.zip set_ip_retention.py remove_expired_ip.py lib test
\ No newline at end of file
diff --git a/deployment/run-unit-tests.sh b/deployment/run-unit-tests.sh
index c4cedc40..351b582b 100755
--- a/deployment/run-unit-tests.sh
+++ b/deployment/run-unit-tests.sh
@@ -7,12 +7,31 @@
# ./run-unit-tests.sh
#
+[ "$DEBUG" == 'true' ] && set -x
+set -e
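+# DEBUG=true turns on command tracing (set -x); set -e aborts the script on the first failing command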
+
template_dir="$PWD"
source_dir="$(cd $template_dir/../source; pwd -P)"
echo "Current directory: $template_dir"
echo "Source directory: $source_dir"
+setup_python_env() {
+ if [ -d "./.venv-test" ]; then
+ echo "Reusing already setup python venv in ./.venv-test. Delete ./.venv-test if you want a fresh one created."
+ return
+ fi
+ echo "Setting up python venv"
+ python3 -m venv .venv-test
+ echo "Initiating virtual environment"
+ source .venv-test/bin/activate
+ echo "Installing python packages"
+ pip3 install -r requirements.txt --target .
+ pip3 install -r requirements_dev.txt
+ echo "deactivate virtual environment"
+ deactivate
+}
+
run_python_lambda_test() {
lambda_name=$1
lambda_description=$2
@@ -23,9 +42,12 @@ run_python_lambda_test() {
cd $source_dir/$lambda_name
echo "run_python_lambda_test: Current directory: $source_dir/$lambda_name"
- # Install dependencies
- echo 'Install Python Testing Dependencies: pip3 install -r ./testing_requirements.txt'
- pip3 install -r ./testing_requirements.txt
+ [ "${CLEAN:-true}" = "true" ] && rm -fr .venv-test
+
+ setup_python_env
+
+ echo "Initiating virtual environment"
+ source .venv-test/bin/activate
# Set coverage report path
mkdir -p $source_dir/test/coverage-reports
@@ -34,16 +56,37 @@ run_python_lambda_test() {
# Run unit tests with coverage
python3 -m pytest --cov --cov-report=term-missing --cov-report "xml:$coverage_report_path"
+
+ if [ "$?" = "1" ]; then
+ echo "(deployment/run-unit-tests.sh) ERROR: there is likely output above." 1>&2
+ exit 1
+ fi
+
    # The pytest --cov with its parameters and .coveragerc generates an xml cov-report with `coverage/sources` list
# with absolute path for the source directories. To avoid dependencies of tools (such as SonarQube) on different
# absolute paths for source directories, this substitution is used to convert each absolute source directory
# path to the corresponding project relative path. The $source_dir holds the absolute path for source directory.
sed -i -e "s,$source_dir,source,g" $coverage_report_path
+ echo "deactivate virtual environment"
+ deactivate
+
+ if [ "${CLEAN:-true}" = "true" ]; then
+ rm -fr .venv-test
+ # Note: leaving $source_dir/test/coverage-reports to allow further processing of coverage reports
+ rm -fr coverage
+ rm .coverage
+ fi
}
# Run Python unit tests
+run_python_lambda_test access_handler "BadBot Access Handler Lambda"
+run_python_lambda_test custom_resource "Custom Resource Lambda"
+run_python_lambda_test helper "Helper Lambda"
run_python_lambda_test ip_retention_handler "Set IP Retention Lambda"
-run_python_lambda_test log_parser "Log Parser"
+run_python_lambda_test log_parser "Log Parser Lambda"
+run_python_lambda_test reputation_lists_parser "Reputation List Parser Lambda"
+run_python_lambda_test timer "Timer Lambda"
+
# Return to the directory where we started
cd $template_dir
\ No newline at end of file
diff --git a/source/access_handler/.coveragerc b/source/access_handler/.coveragerc
new file mode 100644
index 00000000..3aa79036
--- /dev/null
+++ b/source/access_handler/.coveragerc
@@ -0,0 +1,29 @@
+[run]
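+# Exclude vendored third-party packages and test scaffolding from coverage measurement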
+omit =
+ test/*
+ */__init__.py
+ **/__init__.py
+ backoff/*
+ bin/*
+ boto3/*
+ botocore/*
+ certifi/*
+ charset*/*
+ crhelper*
+ chardet*
+ dateutil/*
+ idna/*
+ jmespath/*
+ lib/*
+ package*
+ python_*
+ requests/*
+ s3transfer/*
+ six*
+ tenacity*
+ tests
+ urllib3/*
+ yaml
+ PyYAML-*
+source =
+ .
\ No newline at end of file
diff --git a/source/access_handler/__init__.py b/source/access_handler/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/access_handler/access-handler.py b/source/access_handler/access-handler.py
deleted file mode 100644
index 4ebaa234..00000000
--- a/source/access_handler/access-handler.py
+++ /dev/null
@@ -1,262 +0,0 @@
-######################################################################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
-# with the License. A copy of the License is located at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
-# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
-# and limitations under the License. #
-######################################################################################################################
-
-import requests
-import boto3
-import json
-import logging
-import math
-import time
-import datetime
-import os
-from ipaddress import ip_address
-from ipaddress import ip_network
-from ipaddress import IPv4Network
-from ipaddress import IPv6Network
-from os import environ
-from botocore.config import Config
-
-from lib.waflibv2 import WAFLIBv2
-from lib.solution_metrics import send_metrics
-from lib.boto3_util import create_client
-
-waflib = WAFLIBv2()
-
-
-def send_anonymous_usage_data(log, scope, ipset_name_v4, ipset_arn_v4, ipset_name_v6, ipset_arn_v6):
- try:
- if 'SEND_ANONYMOUS_USAGE_DATA' not in environ or os.getenv('SEND_ANONYMOUS_USAGE_DATA').lower() != 'yes':
- return
-
- log.info("[send_anonymous_usage_data] Start")
- metric_prefix = os.getenv('METRIC_NAME_PREFIX')
-
- cw = create_client('cloudwatch')
- usage_data = {
- "data_type": "bad_bot",
- "bad_bot_ip_set_size": 0,
- "allowed_requests": 0,
- "blocked_requests_all": 0,
- "blocked_requests_bad_bot": 0,
- "waf_type": os.getenv('LOG_TYPE')
- }
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get num allowed requests")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='AllowedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=12 * 3600,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=12 * 3600),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": "ALL"
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
- if len(response['Datapoints']) > 0:
- usage_data['allowed_requests'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to get Num Allowed Requests")
- log.error(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get num blocked requests - all rules")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=12 * 3600,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=12 * 3600),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": "ALL"
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
- if len(response['Datapoints']) > 0:
- usage_data['blocked_requests_all'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to get num blocked requests - all rules")
- log.error(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get bad bot data")
- # --------------------------------------------------------------------------------------------------------------
- if 'IP_SET_ID_BAD_BOTV4' in environ or 'IP_SET_ID_BAD_BOTV6' in environ:
- try:
- countv4 = 0
- response = waflib.get_ip_set(log, scope, ipset_name_v4, ipset_arn_v4)
- log.info(response)
- if response is not None:
- countv4 = len(response['IPSet']['Addresses'])
- log.info("Bad Bot CountV4 %s", countv4)
-
- countv6 = 0
- response = waflib.get_ip_set(log, scope, ipset_name_v6, ipset_arn_v6)
- log.info(response)
- if response is not None:
- countv6 = len(response['IPSet']['Addresses'])
- log.info("Bad Bot CountV6 %s", countv6)
-
- usage_data['bad_bot_ip_set_size'] = str(countv4 + countv6)
-
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=12 * 3600,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=12 * 3600),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": metric_prefix + 'BadBotRule'
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
- if len(response['Datapoints']) > 0:
- usage_data['blocked_requests_bad_bot'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to get bad bot data")
- log.error(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Send Data")
- # --------------------------------------------------------------------------------------------------------------
- response = send_metrics(data=usage_data)
- response_code = response.status_code
- log.info('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
- log.info("[send_anonymous_usage_data] End")
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to Send Data")
- log.error(str(error))
-
-
-# ======================================================================================================================
-# Lambda Entry Point
-# ======================================================================================================================
-def lambda_handler(event, context):
- log = logging.getLogger()
- log.info('[lambda_handler] Start')
- log_level = str(os.getenv('LOG_LEVEL').upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
-
- # ----------------------------------------------------------
- # Read inputs parameters
- # ----------------------------------------------------------
- try:
- scope = os.getenv('SCOPE')
- ipset_name_v4 = os.getenv('IP_SET_NAME_BAD_BOTV4')
- ipset_name_v6 = os.getenv('IP_SET_NAME_BAD_BOTV6')
- ipset_arn_v4 = os.getenv('IP_SET_ID_BAD_BOTV4')
- ipset_arn_v6 = os.getenv('IP_SET_ID_BAD_BOTV6')
-
- # Fixed as old line had security exposure based on user supplied IP address
- log.info("Event->%s<-", str(event))
- if event['requestContext']['identity']['userAgent'] == 'Amazon CloudFront':
- source_ip = str(event['headers']['X-Forwarded-For'].split(',')[0].strip())
- else:
- source_ip = str(event['requestContext']['identity']['sourceIp'])
-
- log.info("scope = %s", scope)
- log.info("ipset_name_v4 = %s", ipset_name_v4)
- log.info("ipset_name_v6 = %s", ipset_name_v6)
- log.info("IPARNV4 = %s", ipset_arn_v4)
- log.info("IPARNV6 = %s", ipset_arn_v6)
- log.info("source_ip = %s", source_ip)
- except Exception as e:
- log.error(e)
- raise
-
- new_address = []
- output = None
- try:
- ip_type = "IPV%s" % ip_address(source_ip).version
- if ip_type == "IPV4":
- new_address.append(IPv4Network(source_ip).with_prefixlen)
- ipset = waflib.get_ip_set(log, scope, ipset_name_v4, ipset_arn_v4)
- # merge old addresses with this one
- log.info(ipset)
- current_list = ipset["IPSet"]["Addresses"]
- log.info(current_list)
- new_list = list(set(current_list) | set(new_address))
- log.info(new_list)
- output = waflib.update_ip_set(log, scope, ipset_name_v4, ipset_arn_v4, new_list)
- elif ip_type == "IPV6":
- new_address.append(IPv6Network(source_ip).with_prefixlen)
- ipset = waflib.get_ip_set(log, scope, ipset_name_v6, ipset_arn_v6)
-
- # merge old addresses with this one
- log.info(ipset)
- current_list = ipset["IPSet"]["Addresses"]
- log.info(current_list)
- new_list = list(set(current_list) | set(new_address))
- log.info(new_list)
- output = waflib.update_ip_set(log, scope, ipset_name_v6, ipset_arn_v6, new_list)
- except Exception as e:
- log.error(e)
- raise
- finally:
- log.info("Output->%s<-", output)
- message = "message: [%s] Thanks for the visit." % source_ip
- response = {
- 'statusCode': 200,
- 'headers': {'Content-Type': 'application/json'},
- 'body': message
- }
-
- if output is not None:
- send_anonymous_usage_data(log, scope, ipset_name_v4, ipset_arn_v4, ipset_name_v6, ipset_arn_v6)
- log.info('[lambda_handler] End')
-
- return response
diff --git a/source/access_handler/access_handler.py b/source/access_handler/access_handler.py
new file mode 100644
index 00000000..7620983a
--- /dev/null
+++ b/source/access_handler/access_handler.py
@@ -0,0 +1,182 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import os
+from ipaddress import ip_address
+from ipaddress import IPv4Network
+from ipaddress import IPv6Network
+from os import environ
+from lib.waflibv2 import WAFLIBv2
+from lib.solution_metrics import send_metrics
+from lib.cw_metrics_util import WAFCloudWatchMetrics
+from lib.logging_util import set_log_level
+
+waflib = WAFLIBv2()
+CW_METRIC_PERIOD_SECONDS = 12 * 3600 # Twelve hours in seconds
+
+def initialize_usage_data():
+ usage_data = {
+ "data_type": "bad_bot",
+ "bad_bot_ip_set_size": 0,
+ "allowed_requests": 0,
+ "blocked_requests_all": 0,
+ "blocked_requests_bad_bot": 0,
+ "waf_type": os.getenv('LOG_TYPE'),
+ "provisioner": os.getenv('provisioner') if "provisioner" in environ else "cfn"
+
+ }
+ return usage_data
+
+
+def get_bad_bot_usage_data(log, scope, cw, ipset_name_v4, ipset_arn_v4, ipset_name_v6, ipset_arn_v6, usage_data):
+ log.info("[get_bad_bot_usage_data] Get bad bot data")
+
+ if 'IP_SET_ID_BAD_BOTV4' in environ or 'IP_SET_ID_BAD_BOTV6' in environ:
+ # Get the count of ipv4 and ipv6 in bad bot ip sets
+ ipv4_count = waflib.get_ip_address_count(log, scope, ipset_name_v4, ipset_arn_v4)
+ ipv6_count = waflib.get_ip_address_count(log, scope, ipset_name_v6, ipset_arn_v6)
+ usage_data['bad_bot_ip_set_size'] = str(ipv4_count + ipv6_count)
+
+ # Get the count of blocked requests for the bad bot rule from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'BlockedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ os.getenv('METRIC_NAME_PREFIX') + 'BadBotRule',
+ usage_data,
+ 'blocked_requests_bad_bot',
+ 0
+ )
+ return usage_data
+
+
+def send_anonymous_usage_data(log, scope, ipset_name_v4, ipset_arn_v4, ipset_name_v6, ipset_arn_v6):
+ try:
+ if 'SEND_ANONYMOUS_USAGE_DATA' not in environ or os.getenv('SEND_ANONYMOUS_USAGE_DATA').lower() != 'yes':
+ return
+
+ log.info("[send_anonymous_usage_data] Start")
+
+ cw = WAFCloudWatchMetrics(log)
+ usage_data = initialize_usage_data()
+
+ # Get the count of allowed requests for all the waf rules from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'AllowedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ 'ALL',
+ usage_data,
+ 'allowed_requests',
+ 0
+ )
+
+ # Get the count of blocked requests for all the waf rules from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'BlockedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ 'ALL',
+ usage_data,
+ 'blocked_requests_all',
+ 0
+ )
+
+ # Get bad bot specific usage data
+ usage_data = get_bad_bot_usage_data(log, scope, cw, ipset_name_v4, ipset_arn_v4,
+ ipset_name_v6, ipset_arn_v6, usage_data)
+
+ # Send usage data
+ log.info('[send_anonymous_usage_data] Send usage data: \n{}'.format(usage_data))
+ response = send_metrics(data=usage_data)
+ response_code = response.status_code
+ log.info('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
+ log.info("[send_anonymous_usage_data] End")
+
+ except Exception as error:
+ log.info("[send_anonymous_usage_data] Failed to Send Data")
+ log.error(str(error))
+
+
+def add_ip_to_ip_set(log, scope, ip_type, source_ip, ipset_name, ipset_arn):
+ new_address = []
+ output = None
+
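+    # Normalize the bare source IP to CIDR notation (/32 for IPv4, /128 for IPv6), the format WAF IP sets store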
+ if ip_type == "IPV4":
+ new_address.append(IPv4Network(source_ip).with_prefixlen)
+ elif ip_type == "IPV6":
+ new_address.append(IPv6Network(source_ip).with_prefixlen)
+
+ ipset = waflib.get_ip_set(log, scope, ipset_name, ipset_arn)
+ # merge old addresses with this one
+ log.info(ipset)
+ current_list = ipset["IPSet"]["Addresses"]
+ log.info(current_list)
+ new_list = list(set(current_list) | set(new_address))
+ log.info(new_list)
+ output = waflib.update_ip_set(log, scope, ipset_name, ipset_arn, new_list)
+
+ return output
+
+
+# ======================================================================================================================
+# Lambda Entry Point
+# ======================================================================================================================
+def lambda_handler(event, _):
+ log = set_log_level()
+ log.info('[lambda_handler] Start')
+
+ # ----------------------------------------------------------
+    # Read input parameters
+ # ----------------------------------------------------------
+ try:
+ scope = os.getenv('SCOPE')
+ ipset_name_v4 = os.getenv('IP_SET_NAME_BAD_BOTV4')
+ ipset_name_v6 = os.getenv('IP_SET_NAME_BAD_BOTV6')
+ ipset_arn_v4 = os.getenv('IP_SET_ID_BAD_BOTV4')
+ ipset_arn_v6 = os.getenv('IP_SET_ID_BAD_BOTV6')
+
+ # Fixed as old line had security exposure based on user supplied IP address
+ log.info("Event->%s<-", str(event))
+ if event['requestContext']['identity']['userAgent'] == 'Amazon CloudFront':
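+            # Behind CloudFront the handler takes the first X-Forwarded-For entry as the client IP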
+ source_ip = str(event['headers']['X-Forwarded-For'].split(',')[0].strip())
+ else:
+ source_ip = str(event['requestContext']['identity']['sourceIp'])
+
+ log.info("scope = %s", scope)
+ log.info("ipset_name_v4 = %s", ipset_name_v4)
+ log.info("ipset_name_v6 = %s", ipset_name_v6)
+ log.info("IPARNV4 = %s", ipset_arn_v4)
+ log.info("IPARNV6 = %s", ipset_arn_v6)
+ log.info("source_ip = %s", source_ip)
+
+ ip_type = "IPV%s" % ip_address(source_ip).version
+ output = None
+ if ip_type == "IPV4":
+ output = add_ip_to_ip_set(log, scope, ip_type, source_ip, ipset_name_v4, ipset_arn_v4)
+ elif ip_type == "IPV6":
+ output = add_ip_to_ip_set(log, scope, ip_type, source_ip, ipset_name_v6, ipset_arn_v6)
+ except Exception as e:
+ log.error(e)
+ raise
+ finally:
+ log.info("Output->%s<-", output)
+ message = "message: [%s] Thanks for the visit." % source_ip
+ response = {
+ 'statusCode': 200,
+ 'headers': {'Content-Type': 'application/json'},
+ 'body': message
+ }
+
+ if output is not None:
+ send_anonymous_usage_data(log, scope, ipset_name_v4, ipset_arn_v4, ipset_name_v6, ipset_arn_v6)
+ log.info('[lambda_handler] End')
+
+ return response
diff --git a/source/access_handler/requirements.txt b/source/access_handler/requirements.txt
index 511213cc..635b9d03 100644
--- a/source/access_handler/requirements.txt
+++ b/source/access_handler/requirements.txt
@@ -1,2 +1,2 @@
-requests>=2.28.2
-backoff>=2.2.1
\ No newline at end of file
+requests~=2.28.2
+backoff~=2.2.1
\ No newline at end of file
diff --git a/source/access_handler/requirements_dev.txt b/source/access_handler/requirements_dev.txt
new file mode 100644
index 00000000..1f9e6301
--- /dev/null
+++ b/source/access_handler/requirements_dev.txt
@@ -0,0 +1,10 @@
+botocore~=1.29.85
+boto3~=1.26.85
+mock~=5.0.1
+moto~=4.1.4
+pytest~=7.2.2
+pytest-mock~=3.10.0
+pytest-runner~=6.0.0
+freezegun~=1.2.2
+pytest-cov~=4.0.0
+pytest-env~=0.8.1
\ No newline at end of file
diff --git a/source/access_handler/test/__init__.py b/source/access_handler/test/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/access_handler/test/conftest.py b/source/access_handler/test/conftest.py
new file mode 100644
index 00000000..58020ad9
--- /dev/null
+++ b/source/access_handler/test/conftest.py
@@ -0,0 +1,142 @@
+##############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is #
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY #
+# KIND, express or implied. See the License for the specific language #
+# governing permissions and limitations under the License. #
+##############################################################################
+
+import pytest
+import boto3
+from os import environ
+from moto import (
+ mock_wafv2,
+ mock_cloudwatch
+)
+
+@pytest.fixture(scope='module', autouse=True)
+def aws_credentials():
+ """Mocked AWS Credentials for moto"""
+ environ['AWS_ACCESS_KEY_ID'] = 'testing'
+ environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
+ environ['AWS_SECURITY_TOKEN'] = 'testing'
+ environ['AWS_SESSION_TOKEN'] = 'testing'
+ environ['AWS_DEFAULT_REGION'] = 'us-east-1'
+ environ['AWS_REGION'] = 'us-east-1'
+
+@pytest.fixture(scope='session')
+def ipset_env_var_setup():
+ environ["SCOPE"] = 'ALB'
+ environ['IP_SET_NAME_BAD_BOTV4'] = 'IP_SET_NAME_BAD_BOTV4'
+ environ['IP_SET_NAME_BAD_BOTV6'] = 'IP_SET_NAME_BAD_BOTV6'
+ environ["IP_SET_ID_BAD_BOTV4"] = 'IP_SET_ID_BAD_BOTV4'
+ environ['IP_SET_ID_BAD_BOTV6'] = 'IP_SET_ID_BAD_BOTV6'
+
+@pytest.fixture(scope="session")
+def wafv2_client():
+ with mock_wafv2():
+ wafv2_client = boto3.client('wafv2')
+ yield wafv2_client
+
+@pytest.fixture(scope="session")
+def cloudwatch_client():
+ with mock_cloudwatch():
+ cloudwatch_client = boto3.client('cloudwatch')
+ yield cloudwatch_client
+
+@pytest.fixture(scope="session")
+def expected_exception_access_handler_error():
+ return "'NoneType' object is not subscriptable"
+
+@pytest.fixture(scope="session")
+def expected_cw_resp():
+ return None
+
+@pytest.fixture(scope="session")
+def badbot_event():
+ return {
+ 'body': None,
+ 'headers': {
+ 'Host': '0xxxx0xx0.execute-api.us-east-2.amazonaws.com',
+ 'Referer': 'https://us-east-2.console.aws.amazon.com/',
+ },
+ 'httpMethod': 'GET',
+ 'isBase64Encoded': False,
+ 'multiValueHeaders': {
+ 'Accept': [ 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8'],
+ 'Accept-Encoding': ['gzip, deflate, br'],
+ 'Accept-Language': ['en-US,en;q=0.5'],
+ 'CloudFront-Forwarded-Proto': ['https'],
+ 'CloudFront-Is-Desktop-Viewer': ['true'],
+ 'CloudFront-Is-Mobile-Viewer': ['false'],
+ 'CloudFront-Is-SmartTV-Viewer': ['false'],
+ 'CloudFront-Is-Tablet-Viewer': ['false'],
+ 'CloudFront-Viewer-ASN': ['16509'],
+ 'CloudFront-Viewer-Country': ['US'],
+ 'Host': [ '0xxxx0xx0.execute-api.us-east-2.amazonaws.com'],
+ 'Referer': [ 'https://us-east-2.console.aws.amazon.com/'],
+ 'User-Agent': [ 'Mozilla/5.0 (Macintosh; Intel '
+ 'Mac OS X 10.15; rv:102.0) '
+ 'Gecko/20100101 Firefox/102.0'],
+ 'Via': [ '2.0 '
+ 'fde752a2d4e95c2353cf5fc17ef7bf2a.cloudfront.net '
+ '(CloudFront)'],
+ 'X-Amz-Cf-Id': [ 'eee9ZGRfH0AhZToSkR1ubIekS_uz5ZoaJRvYCg6cMrBnF090iUyIQg=='],
+ 'X-Amzn-Trace-Id': [ 'Root=1-61196a2b-1c401acb6e744c82255d9844'],
+ 'X-Forwarded-For': ['99.99.99.99, 99.99.99.99'],
+ 'X-Forwarded-Port': ['443'],
+ 'X-Forwarded-Proto': ['https'],
+ 'sec-fetch-dest': ['document'],
+ 'sec-fetch-mode': ['navigate'],
+ 'sec-fetch-site': ['cross-site'],
+ 'sec-fetch-user': ['?1'],
+ 'upgrade-insecure-requests': ['1']
+ },
+ 'multiValueQueryStringParameters': None,
+ 'path': '/',
+ 'pathParameters': None,
+ 'queryStringParameters': None,
+ 'requestContext': {
+ 'accountId': 'xxxxxxxxxxxx',
+ 'apiId': '0xxxx0xx0',
+ 'domainName': '0xxxx0xx0.execute-api.us-east-2.amazonaws.com',
+ 'domainPrefix': '0xxxx0xx0',
+ 'extendedRequestId': 'D_2GyFwDiYcFofg=',
+ 'httpMethod': 'GET',
+ 'identity': {
+ 'accessKey': None,
+ 'accountId': None,
+ 'caller': None,
+ 'cognitoAuthenticationProvider': None,
+ 'cognitoAuthenticationType': None,
+ 'cognitoIdentityId': None,
+ 'cognitoIdentityPoolId': None,
+ 'principalOrgId': None,
+ 'sourceIp': '99.99.99.99',
+ 'user': None,
+ 'userAgent': 'Mozilla/5.0 '
+ '(Macintosh; Intel Mac '
+ 'OS X 10.15; rv:102.0) '
+ 'Gecko/20100101 '
+ 'Firefox/102.0',
+ 'userArn': None
+ },
+ 'path': '/ProdStage',
+ 'protocol': 'HTTP/1.1',
+ 'requestId': '4375792d-c6d0-4f84-8a40-d52f5d18dedd',
+ 'requestTime': '26/Apr/2023:18:15:07 +0000',
+ 'requestTimeEpoch': 1682532907129,
+ 'resourceId': 'yw40vqjfia',
+ 'resourcePath': '/',
+ 'stage': 'ProdStage'
+ },
+ 'resource': '/',
+ 'stageVariables': None
+ }
\ No newline at end of file
diff --git a/source/access_handler/test/test_access_handler.py b/source/access_handler/test/test_access_handler.py
new file mode 100644
index 00000000..bba81692
--- /dev/null
+++ b/source/access_handler/test/test_access_handler.py
@@ -0,0 +1,55 @@
+##############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is #
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY #
+# KIND, express or implied. See the License for the specific language #
+# governing permissions and limitations under the License. #
+##############################################################################
+
+from access_handler.access_handler import *
+import os
+import logging
+
+log_level = 'DEBUG'
+logging.getLogger().setLevel(log_level)
+log = logging.getLogger('test_access_handler')
+
+
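+# No real IP set exists in the mocked environment, so get_ip_set presumably returns None and subscripting it raises the expected error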
+def test_access_handler_error(ipset_env_var_setup, badbot_event, expected_exception_access_handler_error):
+ try:
+ lambda_handler(badbot_event, {})
+ except Exception as e:
+ expected = expected_exception_access_handler_error
+ assert str(e) == expected
+
+def test_initialize_usage_data():
+ os.environ['LOG_TYPE'] = 'LOG_TYPE'
+ result = initialize_usage_data()
+ expected = {
+ "data_type": "bad_bot",
+ "bad_bot_ip_set_size": 0,
+ "allowed_requests": 0,
+ "blocked_requests_all": 0,
+ "blocked_requests_bad_bot": 0,
+ "waf_type": 'LOG_TYPE',
+ "provisioner": "cfn"
+ }
+ assert result == expected
+
+def test_send_anonymous_usage_data(cloudwatch_client, expected_cw_resp):
+ result = send_anonymous_usage_data(
+ log=log,
+ scope='ALB',
+ ipset_name_v4='ipset_name_v4',
+ ipset_arn_v4='ipset_arn_v4',
+ ipset_name_v6='ipset_name_v6',
+ ipset_arn_v6='ipset_arn_v6'
+ )
+ assert result == expected_cw_resp
diff --git a/source/custom_resource/.coveragerc b/source/custom_resource/.coveragerc
new file mode 100644
index 00000000..3aa79036
--- /dev/null
+++ b/source/custom_resource/.coveragerc
@@ -0,0 +1,29 @@
+[run]
+omit =
+ test/*
+ */__init__.py
+ **/__init__.py
+ backoff/*
+ bin/*
+ boto3/*
+ botocore/*
+ certifi/*
+ charset*/*
+ crhelper*
+ chardet*
+ dateutil/*
+ idna/*
+ jmespath/*
+ lib/*
+ package*
+ python_*
+ requests/*
+ s3transfer/*
+ six*
+ tenacity*
+ tests
+ urllib3/*
+ yaml
+ PyYAML-*
+source =
+ .
\ No newline at end of file
diff --git a/source/custom_resource/__init__.py b/source/custom_resource/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/custom_resource/custom-resource.py b/source/custom_resource/custom-resource.py
deleted file mode 100644
index d3b8a396..00000000
--- a/source/custom_resource/custom-resource.py
+++ /dev/null
@@ -1,733 +0,0 @@
-######################################################################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
-# with the License. A copy of the License is located at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
-# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
-# and limitations under the License. #
-######################################################################################################################
-
-import boto3
-import botocore
-import json
-import logging
-import datetime
-import requests
-import os
-import time
-from os import environ
-from botocore.config import Config
-from lib.waflibv2 import WAFLIBv2
-from lib.solution_metrics import send_metrics
-from lib.boto3_util import create_client, create_resource
-
-waflib = WAFLIBv2()
-
-logging.getLogger().debug('Loading function')
-
-
-# ======================================================================================================================
-# Configure Access Log Bucket
-# ======================================================================================================================
-# ----------------------------------------------------------------------------------------------------------------------
-# Create a bucket (if not exist) and configure an event to call Log Parser lambda funcion when new Access log file is
-# created (and stored on this S3 bucket).
-#
-# This function can raise exception if:
-# 01. A empty bucket name is used
-# 02. The bucket already exists and was created in a account that you cant access
-# 03. The bucket already exists and was created in a different region.
-# You can't trigger log parser lambda function from another region.
-#
-# All those requirements are pre-verified by helper function.
-# ----------------------------------------------------------------------------------------------------------------------
-def configure_s3_bucket(log, region, bucket_name, access_logging_bucket_name):
- log.info("[configure_s3_bucket] Start")
-
- if bucket_name.strip() == "":
- raise Exception('Failed to configure access log bucket. Name cannot be empty!')
-
- # ------------------------------------------------------------------------------------------------------------------
- # Create the S3 bucket (if not exist)
- # ------------------------------------------------------------------------------------------------------------------
- s3_client = create_client('s3')
-
- try:
- response = s3_client.head_bucket(Bucket=bucket_name)
- log.info("[configure_s3_bucket]response head_bucket: \n%s" % response)
-
- # Enable access logging if needed
- put_s3_bucket_access_logging(log, s3_client, bucket_name, access_logging_bucket_name)
- except botocore.exceptions.ClientError as e:
- # If a client error is thrown, then check that it was a 404 error.
- # If it was a 404 error, then the bucket does not exist.
- error_code = int(e.response['Error']['Code'])
- if error_code == 404:
- log.info("[configure_s3_bucket]: %s doesn't exist. Create bucket." % bucket_name)
- if region == 'us-east-1':
- s3_client.create_bucket(Bucket=bucket_name, ACL='private')
- else:
- s3_client.create_bucket(Bucket=bucket_name, ACL='private',
- CreateBucketConfiguration={'LocationConstraint': region})
-
- # Begin waiting for the S3 bucket, mybucket, to exist
- s3_bucket_exists_waiter = s3_client.get_waiter('bucket_exists')
- s3_bucket_exists_waiter.wait(Bucket=bucket_name)
-
- # Enable server side encryption on the S3 bucket
- response = s3_client.put_bucket_encryption(
- Bucket=bucket_name,
- ServerSideEncryptionConfiguration={
- 'Rules': [
- {
- 'ApplyServerSideEncryptionByDefault': {
- 'SSEAlgorithm': 'AES256'
- }
- },
- ]
- }
- )
- log.info("[configure_s3_bucket]response put_bucket_encryption: \n%s" % response)
-
- # block public access
- response = s3_client.put_public_access_block(
- Bucket=bucket_name,
- PublicAccessBlockConfiguration={
- 'BlockPublicAcls': True,
- 'IgnorePublicAcls': True,
- 'BlockPublicPolicy': True,
- 'RestrictPublicBuckets': True
- }
- )
- log.info("[configure_s3_bucket]response put_public_access_block: \n%s" % response)
-
- # Enable access logging
- put_s3_bucket_access_logging(log, s3_client, bucket_name, access_logging_bucket_name)
-
- log.info("[configure_s3_bucket] End")
-
-# ----------------------------------------------------------------------------------------------------------------------
-# Enable access logging on the App access log bucket
-# ----------------------------------------------------------------------------------------------------------------------
-def put_s3_bucket_access_logging(log, s3_client, bucket_name, access_logging_bucket_name):
- log.info("[put_s3_bucket_access_logging] Start")
-
- response = s3_client.get_bucket_logging(Bucket = bucket_name)
-
- # Enable access logging if not already exists
- if response.get('LoggingEnabled') is None:
- response = s3_client.put_bucket_logging(
- Bucket=bucket_name,
- BucketLoggingStatus={
- 'LoggingEnabled': {
- 'TargetBucket': access_logging_bucket_name,
- 'TargetPrefix': 'AppAccess_Logs/'
- }
- }
- )
- log.info("[put_s3_bucket_access_logging]put_bucket_logging response: \n%s" % response)
- log.info("[put_s3_bucket_access_logging] End")
-
-# ----------------------------------------------------------------------------------------------------------------------
-# Configure bucket event to call Log Parser whenever a new gz log or athena result file is added to the bucket;
-# call partition s3 log function whenever athena log parser is chosen and a log file is added to the bucket
-# ----------------------------------------------------------------------------------------------------------------------
-def add_s3_bucket_lambda_event(log, bucket_name, lambda_function_arn, lambda_log_partition_function_arn, lambda_parser,
- athena_parser):
- log.info("[add_s3_bucket_lambda_event] Start")
-
- try:
- s3_client = create_client('s3')
- if lambda_function_arn is not None and (lambda_parser or athena_parser):
- notification_conf = s3_client.get_bucket_notification_configuration(Bucket=bucket_name)
-
- log.info("[add_s3_bucket_lambda_event] notification_conf:\n %s"
- % (notification_conf))
-
- new_conf = {}
- new_conf['LambdaFunctionConfigurations'] = []
-
- if 'TopicConfigurations' in notification_conf:
- new_conf['TopicConfigurations'] = notification_conf['TopicConfigurations']
-
- if 'QueueConfigurations' in notification_conf:
- new_conf['QueueConfigurations'] = notification_conf['QueueConfigurations']
-
- if lambda_parser:
- new_conf['LambdaFunctionConfigurations'].append({
- 'Id': 'Call Log Parser',
- 'LambdaFunctionArn': lambda_function_arn,
- 'Events': ['s3:ObjectCreated:*'],
- 'Filter': {'Key': {'FilterRules': [{'Name': 'suffix', 'Value': 'gz'}]}}
- })
-
- if athena_parser:
- new_conf['LambdaFunctionConfigurations'].append({
- 'Id': 'Call Athena Result Parser',
- 'LambdaFunctionArn': lambda_function_arn,
- 'Events': ['s3:ObjectCreated:*'],
- 'Filter': {'Key': {'FilterRules': [{'Name': 'prefix', 'Value': 'athena_results/'},
- {'Name': 'suffix', 'Value': 'csv'}]}}
- })
-
- if lambda_log_partition_function_arn is not None:
- new_conf['LambdaFunctionConfigurations'].append({
- 'Id': 'Call s3 log partition function',
- 'LambdaFunctionArn': lambda_log_partition_function_arn,
- 'Events': ['s3:ObjectCreated:*'],
- 'Filter': {'Key': {
- 'FilterRules': [{'Name': 'prefix', 'Value': 'AWSLogs/'}, {'Name': 'suffix', 'Value': 'gz'}]}}
- })
-
- log.info("[add_s3_bucket_lambda_event] LambdaFunctionConfigurations:\n %s"
- % (new_conf['LambdaFunctionConfigurations']))
-
- s3_client.put_bucket_notification_configuration(Bucket=bucket_name, NotificationConfiguration=new_conf)
- except Exception as error:
- log.error(error)
-
- log.info("[add_s3_bucket_lambda_event] End")
-
-
-# ----------------------------------------------------------------------------------------------------------------------
-# Clean access log bucket event
-# ----------------------------------------------------------------------------------------------------------------------
-def remove_s3_bucket_lambda_event(log, bucket_name, lambda_function_arn, lambda_log_partition_function_arn):
- if lambda_function_arn != None:
- log.info("[remove_s3_bucket_lambda_event] Start")
-
- s3_client = create_client('s3')
- try:
- new_conf = {}
- notification_conf = s3_client.get_bucket_notification_configuration(Bucket=bucket_name)
-
- log.info("[remove_s3_bucket_lambda_event]notification_conf:\n %s"
- % (notification_conf))
-
- if 'TopicConfigurations' in notification_conf:
- new_conf['TopicConfigurations'] = notification_conf['TopicConfigurations']
- if 'QueueConfigurations' in notification_conf:
- new_conf['QueueConfigurations'] = notification_conf['QueueConfigurations']
-
- if 'LambdaFunctionConfigurations' in notification_conf:
- new_conf['LambdaFunctionConfigurations'] = []
- for lfc in notification_conf['LambdaFunctionConfigurations']:
- if lfc['LambdaFunctionArn'] == lambda_function_arn or \
- lfc['LambdaFunctionArn'] == lambda_log_partition_function_arn:
- log.info("[remove_s3_bucket_lambda_event]%s match found, continue." %lfc['LambdaFunctionArn'])
- continue # remove all references
- else:
- new_conf['LambdaFunctionConfigurations'].append(lfc)
- log.info("[remove_s3_bucket_lambda_event]lfc appended: %s" %lfc)
-
- log.info("[remove_s3_bucket_lambda_event]new_conf:\n %s"
- % (new_conf))
-
- s3_client.put_bucket_notification_configuration(Bucket=bucket_name, NotificationConfiguration=new_conf)
-
- except Exception as error:
- log.error(
- "Failed to remove S3 Bucket lambda event. Check if the bucket still exists, you own it and has proper access policy.")
- log.error(str(error))
-
- log.info("[remove_s3_bucket_lambda_event] End")
-
-
-#======================================================================================================================
-# Configure Web ACl
-#======================================================================================================================
-def delete_ip_set(log, scope, ip_set_name, ip_set_id):
- try:
- log.info("[delete_ip_set] Start deleting IP set: name - %s, id - %s"%(ip_set_name, ip_set_id))
-
- response = waflib.delete_ip_set(log, scope, ip_set_name, ip_set_id)
- if response is None:
- log.info("[delete_ip_set] IP set has already been deleted: name - %s, id - %s"%(ip_set_name, ip_set_id))
- return None
-
- log.info(response)
- log.info("[delete_ip_set] End deleting IP set: name - %s, id - %s"%(ip_set_name, ip_set_id))
-
- # sleep for a few seconds at the end of each call to avoid API throttling
- time.sleep(8)
- except Exception as error:
- log.info("[delete_ip_set] Failed to delete IP set: name - %s, id - %s"%(ip_set_name, ip_set_id))
- log.error(str(error))
-
-
-# ======================================================================================================================
-# Configure AWS WAF Logs
-# ======================================================================================================================
-def put_logging_configuration(log, web_acl_arn, delivery_stream_arn):
- log.debug("[waflib:put_logging_configuration] Start")
-
- waflib.put_logging_configuration(log, web_acl_arn, delivery_stream_arn)
-
- log.debug("[waflib:put_logging_configuration] End")
-
-
-def delete_logging_configuration(log, web_acl_arn):
- log.debug("[waflib:delete_logging_configuration] Start")
-
- waflib.delete_logging_configuration(log, web_acl_arn)
-
- log.debug("[waflib:delete_logging_configuration] End")
-
-
-# ======================================================================================================================
-# Generate Log Parser Config File
-# ======================================================================================================================
-def generate_app_log_parser_conf_file(log, stack_name, error_threshold, block_period, app_access_log_bucket, overwrite):
- log.debug("[generate_app_log_parser_conf_file] Start")
-
- local_file = '/tmp/' + stack_name + '-app_log_conf_LOCAL.json'
- remote_file = stack_name + '-app_log_conf.json'
- default_conf = {
- 'general': {
- 'errorThreshold': error_threshold,
- 'blockPeriod': block_period,
- 'errorCodes': ['400', '401', '403', '404', '405']
- },
- 'uriList': {
- }
- }
-
- if not overwrite:
- try:
- s3_resource = create_resource('s3')
- file_obj = s3_resource.Object(app_access_log_bucket, remote_file)
- file_content = file_obj.get()['Body'].read()
- remote_conf = json.loads(file_content)
-
- if 'general' in remote_conf and 'errorCodes' in remote_conf['general']:
- default_conf['general']['errorCodes'] = remote_conf['general']['errorCodes']
-
- if 'uriList' in remote_conf:
- default_conf['uriList'] = remote_conf['uriList']
-
- except Exception as e:
- log.debug("[generate_app_log_parser_conf_file] \tFailed to merge existing conf file data.")
- log.debug(e)
-
- with open(local_file, 'w') as outfile:
- json.dump(default_conf, outfile)
-
- s3_client = create_client('s3')
- s3_client.upload_file(local_file, app_access_log_bucket, remote_file, ExtraArgs={'ContentType': "application/json"})
-
- log.debug("[generate_app_log_parser_conf_file] End")
-
-
-def generate_waf_log_parser_conf_file(log, stack_name, request_threshold, block_period, waf_access_log_bucket,
- overwrite):
- log.debug("[generate_waf_log_parser_conf_file] Start")
-
- local_file = '/tmp/' + stack_name + '-waf_log_conf_LOCAL.json'
- remote_file = stack_name + '-waf_log_conf.json'
- default_conf = {
- 'general': {
- 'requestThreshold': request_threshold,
- 'blockPeriod': block_period,
- 'ignoredSufixes': []
- },
- 'uriList': {
- }
- }
-
- if not overwrite:
- try:
- s3_resource = create_resource('s3')
- file_obj = s3_resource.Object(waf_access_log_bucket, remote_file)
- file_content = file_obj.get()['Body'].read()
- remote_conf = json.loads(file_content)
-
- if 'general' in remote_conf and 'ignoredSufixes' in remote_conf['general']:
- default_conf['general']['ignoredSufixes'] = remote_conf['general']['ignoredSufixes']
-
- if 'uriList' in remote_conf:
- default_conf['uriList'] = remote_conf['uriList']
-
- except Exception as e:
- log.debug("[generate_waf_log_parser_conf_file] \tFailed to merge existing conf file data.")
- log.debug(e)
-
- with open(local_file, 'w') as outfile:
- json.dump(default_conf, outfile)
-
- s3_client = create_client('s3')
- s3_client.upload_file(local_file, waf_access_log_bucket, remote_file, ExtraArgs={'ContentType': "application/json"})
-
- log.debug("[generate_waf_log_parser_conf_file] End")
-
-
-# ======================================================================================================================
-# Add Athena Partitions
-# ======================================================================================================================
-def add_athena_partitions(log, add_athena_partition_lambda_function, resource_type,
- glue_database, access_log_bucket, glue_access_log_table,
- glue_waf_log_table, waf_log_bucket, athena_work_group):
- log.info("[add_athena_partitions] Start")
-
- lambda_client = create_client('lambda')
- response = lambda_client.invoke(
- FunctionName=add_athena_partition_lambda_function.rsplit(":", 1)[-1],
- Payload="""{
- "resourceType":"%s",
- "glueAccessLogsDatabase":"%s",
- "accessLogBucket":"%s",
- "glueAppAccessLogsTable":"%s",
- "glueWafAccessLogsTable":"%s",
- "wafLogBucket":"%s",
- "athenaWorkGroup":"%s"
- }""" % (resource_type, glue_database, access_log_bucket,
- glue_access_log_table, glue_waf_log_table,
- waf_log_bucket, athena_work_group)
- )
- log.info("[add_athena_partitions] Lambda invocation response:\n%s" % response)
- log.info("[add_athena_partitions] End")
-
-
-# ======================================================================================================================
-# Auxiliary Functions
-# ======================================================================================================================
-def send_response(log, event, context, responseStatus, responseData, resourceId, reason=None):
- log.debug("[send_response] Start")
-
- responseUrl = event['ResponseURL']
- cw_logs_url = "https://console.aws.amazon.com/cloudwatch/home?region=%s#logEventViewer:group=%s;stream=%s" % (
- context.invoked_function_arn.split(':')[3], context.log_group_name, context.log_stream_name)
-
- log.info(responseUrl)
- responseBody = {}
- responseBody['Status'] = responseStatus
- responseBody['Reason'] = reason or ('See the details in CloudWatch Logs: ' + cw_logs_url)
- responseBody['PhysicalResourceId'] = resourceId
- responseBody['StackId'] = event['StackId']
- responseBody['RequestId'] = event['RequestId']
- responseBody['LogicalResourceId'] = event['LogicalResourceId']
- responseBody['NoEcho'] = False
- responseBody['Data'] = responseData
-
- json_responseBody = json.dumps(responseBody)
- log.debug("Response body:\n" + json_responseBody)
-
- headers = {
- 'content-type': '',
- 'content-length': str(len(json_responseBody))
- }
-
- try:
- response = requests.put(responseUrl,
- data=json_responseBody,
- headers=headers,
- timeout=600)
- log.debug("Status code: " + response.reason)
-
- except Exception as error:
- log.error("[send_response] Failed executing requests.put(..)")
- log.error(str(error))
-
- log.debug("[send_response] End")
-
-
-def send_anonymous_usage_data(log, action_type, resource_properties):
- try:
- if 'SendAnonymousUsageData' not in resource_properties or resource_properties[
- 'SendAnonymousUsageData'].lower() != 'yes':
- return
- log.info("[send_anonymous_usage_data] Start")
-
- usage_data = {
- "version": resource_properties['Version'],
- "data_type": "custom_resource",
- "region": resource_properties['Region'],
- "action": action_type,
- "sql_injection_protection": resource_properties['ActivateSqlInjectionProtectionParam'],
- "xss_scripting_protection": resource_properties['ActivateCrossSiteScriptingProtectionParam'],
- "http_flood_protection": resource_properties['ActivateHttpFloodProtectionParam'],
- "scanners_probes_protection": resource_properties['ActivateScannersProbesProtectionParam'],
- "reputation_lists_protection": resource_properties['ActivateReputationListsProtectionParam'],
- "bad_bot_protection": resource_properties['ActivateBadBotProtectionParam'],
- "request_threshold": resource_properties['RequestThreshold'],
- "error_threshold": resource_properties['ErrorThreshold'],
- "waf_block_period": resource_properties['WAFBlockPeriod'],
- "aws_managed_rules": resource_properties['ActivateAWSManagedRulesParam'],
- "keep_original_s3_data": resource_properties['KeepDataInOriginalS3Location'],
- "allowed_ip_retention_period_minute": resource_properties['IPRetentionPeriodAllowedParam'],
- "denied_ip_retention_period_minute": resource_properties['IPRetentionPeriodDeniedParam'],
- "sns_email_notification": resource_properties['SNSEmailParam']
- }
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Send Data")
- # --------------------------------------------------------------------------------------------------------------
- response = send_metrics(data=usage_data)
- response_code = response.status_code
- log.info('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
- log.info("[send_anonymous_usage_data] End")
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to Send Data")
- log.debug(str(error))
-
-
-# ======================================================================================================================
-# Lambda Entry Point
-# ======================================================================================================================
-def lambda_handler(event, context):
- log = logging.getLogger()
- responseStatus = 'SUCCESS'
- reason = None
- responseData = {}
- resourceId = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else event['LogicalResourceId']
- result = {
- 'StatusCode': '200',
- 'Body': {'message': 'success'}
- }
-
- try:
- # ------------------------------------------------------------------
- # Set Log Level
- # ------------------------------------------------------------------
- log_level = str(os.getenv('LOG_LEVEL').upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
-
- # ----------------------------------------------------------
- # Read inputs parameters
- # ----------------------------------------------------------
- log.info(event)
- request_type = event['RequestType'].upper() if ('RequestType' in event) else ""
- log.info(request_type)
-
- # ----------------------------------------------------------
- # Process event
- # ----------------------------------------------------------
- if event['ResourceType'] == "Custom::ConfigureAppAccessLogBucket":
- lambda_log_parser_function = event['ResourceProperties']['LogParser'] if 'LogParser' in event[
- 'ResourceProperties'] else None
- lambda_partition_s3_logs_function = event['ResourceProperties'][
- 'MoveS3LogsForPartition'] if 'MoveS3LogsForPartition' in event['ResourceProperties'] else None
- lambda_parser = True if event['ResourceProperties']['ScannersProbesLambdaLogParser'] == 'yes' else False
- athena_parser = True if event['ResourceProperties']['ScannersProbesAthenaLogParser'] == 'yes' else False
-
- if 'CREATE' in request_type:
- configure_s3_bucket(log, event['ResourceProperties']['Region'],
- event['ResourceProperties']['AppAccessLogBucket'],
- event['ResourceProperties']['AccessLoggingBucket'])
- add_s3_bucket_lambda_event(log, event['ResourceProperties']['AppAccessLogBucket'],
- lambda_log_parser_function,
- lambda_partition_s3_logs_function,
- lambda_parser,
- athena_parser)
-
- elif 'UPDATE' in request_type:
- configure_s3_bucket(log, event['ResourceProperties']['Region'],
- event['ResourceProperties']['AppAccessLogBucket'],
- event['ResourceProperties']['AccessLoggingBucket'])
- old_lambda_app_log_parser_function = event['OldResourceProperties']['LogParser'] if 'LogParser' in \
- event[
- 'OldResourceProperties'] else None
- old_lambda_partition_s3_logs_function = event['OldResourceProperties']['MoveS3LogsForPartition'] \
- if 'MoveS3LogsForPartition' in event['OldResourceProperties'] else None
- old_lambda_parser = True if event['OldResourceProperties'][
- 'ScannersProbesLambdaLogParser'] == 'yes' else False
- old_athena_parser = True if event['OldResourceProperties'][
- 'ScannersProbesAthenaLogParser'] == 'yes' else False
-
- if (event['OldResourceProperties']['AppAccessLogBucket'] != event['ResourceProperties'][
- 'AppAccessLogBucket'] or
- old_lambda_app_log_parser_function != lambda_log_parser_function or
- old_lambda_partition_s3_logs_function != lambda_partition_s3_logs_function or
- old_lambda_parser != lambda_parser or
- old_athena_parser != athena_parser):
-
- remove_s3_bucket_lambda_event(log, event['OldResourceProperties']["AppAccessLogBucket"],
- old_lambda_app_log_parser_function,
- old_lambda_partition_s3_logs_function)
- add_s3_bucket_lambda_event(log, event['ResourceProperties']['AppAccessLogBucket'],
- lambda_log_parser_function,
- lambda_partition_s3_logs_function,
- lambda_parser,
- athena_parser)
-
- elif 'DELETE' in request_type:
- remove_s3_bucket_lambda_event(log, event['ResourceProperties']["AppAccessLogBucket"],
- lambda_log_parser_function, lambda_partition_s3_logs_function)
- elif event['ResourceType'] == "Custom::ConfigureWafLogBucket":
- lambda_log_parser_function = event['ResourceProperties']['LogParser'] if 'LogParser' in event[
- 'ResourceProperties'] else None
- lambda_partition_s3_logs_function = None
- lambda_parser = True if event['ResourceProperties']['HttpFloodLambdaLogParser'] == 'yes' else False
- athena_parser = True if event['ResourceProperties']['HttpFloodAthenaLogParser'] == 'yes' else False
-
- if 'CREATE' in request_type:
- add_s3_bucket_lambda_event(log, event['ResourceProperties']['WafLogBucket'],
- lambda_log_parser_function,
- lambda_partition_s3_logs_function,
- lambda_parser,
- athena_parser)
-
- elif 'UPDATE' in request_type:
- old_lambda_app_log_parser_function = event['OldResourceProperties']['LogParser'] if 'LogParser' in \
- event[
- 'OldResourceProperties'] else None
- old_lambda_parser = True if event['OldResourceProperties'][
- 'HttpFloodLambdaLogParser'] == 'yes' else False
- old_athena_parser = True if event['OldResourceProperties'][
- 'HttpFloodAthenaLogParser'] == 'yes' else False
-
- if (event['OldResourceProperties']['WafLogBucket'] != event['ResourceProperties']['WafLogBucket'] or
- old_lambda_app_log_parser_function != lambda_log_parser_function or
- old_lambda_parser != lambda_parser or
- old_athena_parser != athena_parser):
- remove_s3_bucket_lambda_event(log, event['OldResourceProperties']["WafLogBucket"],
- old_lambda_app_log_parser_function,
- lambda_partition_s3_logs_function)
- add_s3_bucket_lambda_event(log, event['ResourceProperties']['WafLogBucket'],
- lambda_log_parser_function,
- lambda_partition_s3_logs_function,
- lambda_parser,
- athena_parser)
-
- elif 'DELETE' in request_type:
- remove_s3_bucket_lambda_event(log, event['ResourceProperties']["WafLogBucket"],
- lambda_log_parser_function,
- lambda_partition_s3_logs_function)
-
- elif event['ResourceType'] == "Custom::ConfigureWebAcl":
- # Manually delete ip sets to avoid throttling occurred during stack deletion due to API call limit
- if 'DELETE' in request_type:
- scope = os.getenv('SCOPE')
- if 'WAFWhitelistSetIPV4' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFWhitelistSetIPV4Name'],
- event['ResourceProperties']['WAFWhitelistSetIPV4'])
- if 'WAFBlacklistSetIPV4' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFBlacklistSetIPV4Name'],
- event['ResourceProperties']['WAFBlacklistSetIPV4'])
- if 'WAFHttpFloodSetIPV4' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFHttpFloodSetIPV4Name'],
- event['ResourceProperties']['WAFHttpFloodSetIPV4'])
- if 'WAFScannersProbesSetIPV4' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFScannersProbesSetIPV4Name'],
- event['ResourceProperties']['WAFScannersProbesSetIPV4'])
- if 'WAFReputationListsSetIPV4' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFReputationListsSetIPV4Name'],
- event['ResourceProperties']['WAFReputationListsSetIPV4'])
- if 'WAFBadBotSetIPV4' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFBadBotSetIPV4Name'],
- event['ResourceProperties']['WAFBadBotSetIPV4'])
- if 'WAFWhitelistSetIPV6' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFWhitelistSetIPV6Name'],
- event['ResourceProperties']['WAFWhitelistSetIPV6'])
- if 'WAFBlacklistSetIPV6' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFBlacklistSetIPV6Name'],
- event['ResourceProperties']['WAFBlacklistSetIPV6'])
- if 'WAFHttpFloodSetIPV6' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFHttpFloodSetIPV6Name'],
- event['ResourceProperties']['WAFHttpFloodSetIPV6'])
- if 'WAFScannersProbesSetIPV6' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFScannersProbesSetIPV6Name'],
- event['ResourceProperties']['WAFScannersProbesSetIPV6'])
- if 'WAFReputationListsSetIPV6' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFReputationListsSetIPV6Name'],
- event['ResourceProperties']['WAFReputationListsSetIPV6'])
- if 'WAFBadBotSetIPV6' in event['ResourceProperties']:
- delete_ip_set(log, scope,
- event['ResourceProperties']['WAFBadBotSetIPV6Name'],
- event['ResourceProperties']['WAFBadBotSetIPV6'])
-
- send_anonymous_usage_data(log, event['RequestType'], event['ResourceProperties'])
-
- elif event['ResourceType'] == "Custom::ConfigureAWSWAFLogs":
- if 'CREATE' in request_type:
- put_logging_configuration(log, event['ResourceProperties']['WAFWebACLArn'],
- event['ResourceProperties']['DeliveryStreamArn'])
-
- elif 'UPDATE' in request_type:
- delete_logging_configuration(log, event['OldResourceProperties']['WAFWebACLArn'])
- put_logging_configuration(log, event['ResourceProperties']['WAFWebACLArn'],
- event['ResourceProperties']['DeliveryStreamArn'])
-
- elif 'DELETE' in request_type:
- delete_logging_configuration(log, event['ResourceProperties']['WAFWebACLArn'])
-
- elif event['ResourceType'] == "Custom::GenerateAppLogParserConfFile":
- stack_name = event['ResourceProperties']['StackName']
- error_threshold = int(event['ResourceProperties']['ErrorThreshold'])
- block_period = int(event['ResourceProperties']['WAFBlockPeriod'])
- app_access_log_bucket = event['ResourceProperties']['AppAccessLogBucket']
-
- if 'CREATE' in request_type:
- generate_app_log_parser_conf_file(log, stack_name, error_threshold, block_period, app_access_log_bucket,
- True)
- elif 'UPDATE' in request_type:
- generate_app_log_parser_conf_file(log, stack_name, error_threshold, block_period, app_access_log_bucket,
- False)
-
- # DELETE: do nothing
-
- elif event['ResourceType'] == "Custom::GenerateWafLogParserConfFile":
- stack_name = event['ResourceProperties']['StackName']
- request_threshold = int(event['ResourceProperties']['RequestThreshold'])
- block_period = int(event['ResourceProperties']['WAFBlockPeriod'])
- waf_access_log_bucket = event['ResourceProperties']['WafAccessLogBucket']
-
- if 'CREATE' in request_type:
- generate_waf_log_parser_conf_file(log, stack_name, request_threshold, block_period,
- waf_access_log_bucket,
- True)
- elif 'UPDATE' in request_type:
- generate_waf_log_parser_conf_file(log, stack_name, request_threshold, block_period,
- waf_access_log_bucket,
- False)
- # DELETE: do nothing
-
- elif event['ResourceType'] == "Custom::AddAthenaPartitions":
- if 'CREATE' in request_type or 'UPDATE' in request_type:
- add_athena_partitions(
- log,
- event['ResourceProperties']['AddAthenaPartitionsLambda'],
- event['ResourceProperties']['ResourceType'],
- event['ResourceProperties']['GlueAccessLogsDatabase'],
- event['ResourceProperties']['AppAccessLogBucket'],
- event['ResourceProperties']['GlueAppAccessLogsTable'],
- event['ResourceProperties']['GlueWafAccessLogsTable'],
- event['ResourceProperties']['WafLogBucket'],
- event['ResourceProperties']['AthenaWorkGroup'])
-
- # DELETE: do nothing
-
- except Exception as error:
- log.error(error)
- responseStatus = 'FAILED'
- reason = str(error)
- result = {
- 'statusCode': '500',
- 'body': {'message': reason}
- }
-
- finally:
- # ------------------------------------------------------------------
- # Send Result
- # ------------------------------------------------------------------
- if 'ResponseURL' in event:
- send_response(log, event, context, responseStatus, responseData, resourceId, reason)
-
- return json.dumps(result)
diff --git a/source/custom_resource/custom_resource.py b/source/custom_resource/custom_resource.py
new file mode 100644
index 00000000..2d05dcd0
--- /dev/null
+++ b/source/custom_resource/custom_resource.py
@@ -0,0 +1,147 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import json
+from resource_manager import ResourceManager
+from log_group_retention import LogGroupRetention
+from lib.cfn_response import send_response
+from lib.logging_util import set_log_level
+
+# ======================================================================================================================
+# Lambda Entry Point
+# ======================================================================================================================
+def lambda_handler(event, context):
+
+ log = set_log_level()
+ response_status = 'SUCCESS'
+ reason = None
+ response_data = {}
+ resource_id = event.get('PhysicalResourceId', event['LogicalResourceId'])
+ result = {
+ 'StatusCode': '200',
+ 'Body': {'message': 'success'}
+ }
+ resource_manager = ResourceManager(log=log)
+
+ log.info(f'context: {context}')
+
+ try:
+ # ----------------------------------------------------------
+        # Read input parameters
+ # ----------------------------------------------------------
+ log.info(event)
+ request_type = event.get('RequestType', "").upper()
+ log.info(request_type)
+
+ # ----------------------------------------------------------
+ # Process event
+ # ----------------------------------------------------------
+
+ if event['ResourceType'] == "Custom::SetCloudWatchLogGroupRetention" and request_type in {'UPDATE', 'CREATE'}:
+ log_group_retention = LogGroupRetention(log)
+ log_group_retention.update_retention(
+ event=event
+ )
+
+ if event['ResourceType'] == "Custom::ConfigureAppAccessLogBucket":
+ if 'CREATE' in request_type:
+ resource_manager.configure_s3_bucket(event)
+ app_access_params = resource_manager.get_params_app_access_create_event(event)
+ resource_manager.add_s3_bucket_lambda_event(**app_access_params)
+
+ elif 'UPDATE' in request_type:
+ resource_manager.configure_s3_bucket(event)
+ if resource_manager.contains_old_app_access_resources(event):
+ resource_manager.update_app_access_log_bucket(event)
+
+ elif 'DELETE' in request_type:
+ bucket_lambda_params = resource_manager.get_params_app_access_delete_event(event)
+ resource_manager.remove_s3_bucket_lambda_event(**bucket_lambda_params)
+
+
+ elif event['ResourceType'] == "Custom::ConfigureWafLogBucket":
+ if 'CREATE' in request_type:
+ waf_params = resource_manager.get_params_waf_event(event)
+ resource_manager.add_s3_bucket_lambda_event(**waf_params)
+
+ elif 'UPDATE' in request_type:
+ if resource_manager.waf_has_old_resources(event):
+ resource_manager.update_waf_log_bucket(event)
+
+ elif 'DELETE' in request_type:
+ bucket_lambda_params = resource_manager.get_params_bucket_lambda_delete_event(event)
+ resource_manager.remove_s3_bucket_lambda_event(**bucket_lambda_params)
+
+
+ elif event['ResourceType'] == "Custom::ConfigureWebAcl":
+            # Manually delete IP sets to avoid throttling during stack deletion due to API call limits
+ if 'DELETE' in request_type:
+ resource_manager.delete_ip_sets(event)
+ resource_manager.send_anonymous_usage_data(event['RequestType'], event.get('ResourceProperties', {}))
+
+
+ elif event['ResourceType'] == "Custom::ConfigureAWSWAFLogs":
+ if 'CREATE' in request_type:
+ resource_manager.put_logging_configuration(event)
+
+ elif 'UPDATE' in request_type:
+ resource_manager.delete_logging_configuration(event)
+ resource_manager.put_logging_configuration(event)
+
+ elif 'DELETE' in request_type:
+ resource_manager.delete_logging_configuration(event)
+
+
+ elif event['ResourceType'] == "Custom::GenerateAppLogParserConfFile":
+ if 'CREATE' in request_type:
+ resource_manager.generate_app_log_parser_conf_file(event, overwrite=True)
+
+ elif 'UPDATE' in request_type:
+ resource_manager.generate_app_log_parser_conf_file(event, overwrite=False)
+
+ # DELETE: do nothing
+
+
+ elif event['ResourceType'] == "Custom::GenerateWafLogParserConfFile":
+ if 'CREATE' in request_type:
+ resource_manager.generate_waf_log_parser_conf_file(event, overwrite=True)
+
+ elif 'UPDATE' in request_type:
+ resource_manager.generate_waf_log_parser_conf_file(event, overwrite=False)
+
+ # DELETE: do nothing
+
+
+ elif event['ResourceType'] == "Custom::AddAthenaPartitions":
+ if 'CREATE' in request_type or 'UPDATE' in request_type:
+ resource_manager.add_athena_partitions(event)
+
+ # DELETE: do nothing
+
+ except Exception as error:
+ log.error(error)
+ response_status = 'FAILED'
+ reason = str(error)
+ result = {
+ 'statusCode': '500',
+ 'body': {'message': reason}
+ }
+
+ finally:
+ # ------------------------------------------------------------------
+ # Send Result
+ # ------------------------------------------------------------------
+ if 'ResponseURL' in event:
+ send_response(log, event, context, response_status, response_data, resource_id, reason)
+
+ return json.dumps(result)
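+
+
+# For reference, a minimal illustrative event that exercises the Custom::ConfigureAppAccessLogBucket
+# branch above. All field values are placeholders, not real resources:
+#
+# example_event = {
+#     'RequestType': 'Create',
+#     'ResourceType': 'Custom::ConfigureAppAccessLogBucket',
+#     'LogicalResourceId': 'ConfigureAppAccessLogBucket',
+#     'ResourceProperties': {
+#         'Region': 'us-east-1',
+#         'AppAccessLogBucket': 'example-app-access-log-bucket',
+#         'AccessLoggingBucket': 'example-access-logging-bucket',
+#         'LogParser': 'arn:aws:lambda:us-east-1:111122223333:function:ExampleLogParser',
+#         'MoveS3LogsForPartition': 'arn:aws:lambda:us-east-1:111122223333:function:ExampleMoveS3Logs',
+#         'AppAccessLogBucketPrefix': 'AWSLogs/',
+#         'ScannersProbesLambdaLogParser': 'yes',
+#         'ScannersProbesAthenaLogParser': 'no'
+#     }
+# }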
diff --git a/source/custom_resource/log_group_retention.py b/source/custom_resource/log_group_retention.py
new file mode 100644
index 00000000..53f5e94c
--- /dev/null
+++ b/source/custom_resource/log_group_retention.py
@@ -0,0 +1,86 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+from lib.boto3_util import create_client
+
+TRUNC_STACK_NAME_MAX_LEN = 20
+
+class LogGroupRetention:
+ def __init__(self, log):
+ self.log = log
+
+ def update_retention(self, event):
+ cloudwatch = create_client('logs')
+
+ log_group_prefix = self.get_log_group_prefix(
+ stack_name=event['ResourceProperties']['StackName']
+ )
+
+ log_groups = cloudwatch.describe_log_groups(
+ logGroupNamePrefix=log_group_prefix
+ )
+
+ lambda_names = self.get_lambda_names(
+ resource_props=event['ResourceProperties']
+ )
+
+ self.set_log_group_retention(
+ client=cloudwatch,
+ log_groups=log_groups,
+ lambda_names=lambda_names,
+ retention_period=int(event['ResourceProperties']['LogGroupRetention'])
+ )
+
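+    # Usage sketch (illustrative): for a Custom::SetCloudWatchLogGroupRetention event,
+    #     LogGroupRetention(log).update_retention(event=event)
+    # reads event['ResourceProperties']['LogGroupRetention'] and applies it to the matching
+    # '/aws/lambda/<truncated stack name>...' log groups resolved by get_lambda_names().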
+
+ def get_lambda_names(self, resource_props):
+ lambdas = [
+ 'CustomResourceLambdaName',
+ 'MoveS3LogsForPartitionLambdaName',
+ 'AddAthenaPartitionsLambdaName',
+ 'SetIPRetentionLambdaName',
+ 'RemoveExpiredIPLambdaName',
+ 'ReputationListsParserLambdaName',
+ 'BadBotParserLambdaName',
+ 'HelperLambdaName',
+ 'LogParserLambdaName',
+ 'CustomTimerLambdaName'
+ ]
+ lambda_names = set()
+ for lam in lambdas:
+            lambda_name = resource_props.get(lam, '')
+ if lambda_name:
+ lambda_names.add(f'/aws/lambda/{lambda_name}')
+ return lambda_names
+
+
+ def truncate_stack_name(self, stack_name):
+        # A stack name can be up to 128 characters, but a Lambda function name is limited to 64 characters,
+        # so the Lambda function names (and their log groups) use a truncated stack name
+ if len(stack_name) < TRUNC_STACK_NAME_MAX_LEN:
+ return stack_name
+ return stack_name[0:TRUNC_STACK_NAME_MAX_LEN]
+
+
+ def get_log_group_prefix(self, stack_name):
+ truncated_stack_name = self.truncate_stack_name(stack_name)
+ return f'/aws/lambda/{truncated_stack_name}'
+
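+    # Example (illustrative): get_log_group_prefix('aws-waf-security-automations-prod') returns
+    # '/aws/lambda/aws-waf-security-aut', because the stack name is truncated to 20 characters.
+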
+
+ def set_log_group_retention(self, client, log_groups, lambda_names, retention_period):
+ for log_group in log_groups['logGroups']:
+ if log_group['logGroupName'] in lambda_names:
+                client.put_retention_policy(
+                    logGroupName=log_group['logGroupName'],
+                    retentionInDays=int(retention_period)
+                )
+ self.log.info(f'put retention for log group {log_group["logGroupName"]}')
\ No newline at end of file
diff --git a/source/custom_resource/requirements.txt b/source/custom_resource/requirements.txt
index 511213cc..635b9d03 100644
--- a/source/custom_resource/requirements.txt
+++ b/source/custom_resource/requirements.txt
@@ -1,2 +1,2 @@
-requests>=2.28.2
-backoff>=2.2.1
\ No newline at end of file
+requests~=2.28.2
+backoff~=2.2.1
\ No newline at end of file
diff --git a/source/custom_resource/requirements_dev.txt b/source/custom_resource/requirements_dev.txt
new file mode 100644
index 00000000..1f9e6301
--- /dev/null
+++ b/source/custom_resource/requirements_dev.txt
@@ -0,0 +1,10 @@
+botocore~=1.29.85
+boto3~=1.26.85
+mock~=5.0.1
+moto~=4.1.4
+pytest~=7.2.2
+pytest-mock~=3.10.0
+pytest-runner~=6.0.0
+freezegun~=1.2.2
+pytest-cov~=4.0.0
+pytest-env~=0.8.1
\ No newline at end of file
diff --git a/source/custom_resource/resource_manager.py b/source/custom_resource/resource_manager.py
new file mode 100644
index 00000000..010becc3
--- /dev/null
+++ b/source/custom_resource/resource_manager.py
@@ -0,0 +1,662 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import json
+import botocore
+import os
+from logging import Logger
+from lib.waflibv2 import WAFLIBv2
+from lib.boto3_util import create_client
+from lib.s3_util import S3
+from lib.solution_metrics import send_metrics
+
+AWS_LOGS_PATH_PREFIX = 'AWSLogs/'
+S3_OBJECT_CREATED = 's3:ObjectCreated:*'
+EMPTY_BUCKET_NAME_EXCEPTION = Exception('Failed to configure access log bucket. Name cannot be empty!')
+
+
+class ResourceManager:
+ def __init__(self, log: Logger):
+ self.log = log
+ self.waflib = WAFLIBv2()
+ self.s3 = S3(log)
+
+ def update_waf_log_bucket(self, event: dict) -> None:
+ bucket_lambda_params = self.get_params_bucket_lambda_update_event(event)
+ waf_params = self.get_params_waf_event(event)
+ self.remove_s3_bucket_lambda_event(**bucket_lambda_params)
+ self.add_s3_bucket_lambda_event(**waf_params)
+
+ def update_app_access_log_bucket(self, event: dict) -> None:
+ bucket_lambda_params = self.get_params_app_access_update_event(event)
+ app_access_params = self.get_params_app_access_update(event)
+ self.remove_s3_bucket_lambda_event(**bucket_lambda_params)
+ self.add_s3_bucket_lambda_event(**app_access_params)
+
+ def get_params_waf_event(self, event: dict) -> dict:
+ params = {}
+ resource_props = event.get('ResourceProperties', {})
+ params['bucket_name'] = resource_props['WafLogBucket']
+ params['lambda_function_arn'] = resource_props.get('LogParser', None)
+ params['lambda_log_partition_function_arn'] = None
+ params['lambda_parser'] = resource_props['HttpFloodLambdaLogParser'] == 'yes'
+ params['athena_parser'] = resource_props['HttpFloodAthenaLogParser'] == 'yes'
+ params['bucket_prefix'] = AWS_LOGS_PATH_PREFIX
+ return params
+
+ def get_params_app_access_update(self, event: dict) -> dict:
+ params = {}
+ resource_props = event.get('ResourceProperties', {})
+ params['bucket_name'] = resource_props['AppAccessLogBucket']
+ params['lambda_function_arn'] = resource_props.get('LogParser', None)
+ params['lambda_log_partition_function_arn'] = resource_props.get('MoveS3LogsForPartition', None)
+ params['lambda_parser'] = resource_props['ScannersProbesLambdaLogParser'] == 'yes'
+ params['athena_parser'] = resource_props['ScannersProbesAthenaLogParser'] == 'yes'
+ if resource_props['AppAccessLogBucketPrefix'] != AWS_LOGS_PATH_PREFIX:
+ params['bucket_prefix'] = resource_props['AppAccessLogBucketPrefix']
+ else:
+ params['bucket_prefix'] = AWS_LOGS_PATH_PREFIX
+ return params
+
+ def get_params_app_access_create_event(self, event: dict) -> dict:
+ params = {}
+ resource_props = event.get('ResourceProperties', {})
+ params['lambda_function_arn'] = resource_props.get('LogParser', None)
+ params['lambda_log_partition_function_arn'] = resource_props.get('MoveS3LogsForPartition', None)
+ params['bucket_name'] = resource_props['AppAccessLogBucket']
+ params['lambda_parser'] = resource_props['ScannersProbesLambdaLogParser'] == 'yes'
+ params['athena_parser'] = resource_props['ScannersProbesAthenaLogParser'] == 'yes'
+ if resource_props['AppAccessLogBucketPrefix'] != AWS_LOGS_PATH_PREFIX:
+ params['bucket_prefix'] = resource_props['AppAccessLogBucketPrefix']
+ else:
+ params['bucket_prefix'] = AWS_LOGS_PATH_PREFIX
+ return params
+
+ # ----------------------------------------------------------------------------------------------------------------------
+ # Configure bucket event to call Log Parser whenever a new gz log or athena result file is added to the bucket;
+ # call partition s3 log function whenever athena log parser is chosen and a log file is added to the bucket
+ # ----------------------------------------------------------------------------------------------------------------------
+
+    def add_s3_bucket_lambda_event(self, bucket_name: str, lambda_function_arn: str, lambda_log_partition_function_arn: str,
+                                   lambda_parser: bool, athena_parser: bool, bucket_prefix: str) -> None:
+ self.log.info("[add_s3_bucket_lambda_event] Start")
+
+ try:
+ if lambda_function_arn is not None and (lambda_parser or athena_parser):
+ notification_conf = self.s3.get_bucket_notification_configuration(bucket_name)
+
+ self.log.info("[add_s3_bucket_lambda_event] notification_conf:\n %s"
+ % (notification_conf))
+
+ new_conf = {}
+ new_conf['LambdaFunctionConfigurations'] = []
+
+ if 'TopicConfigurations' in notification_conf:
+ new_conf['TopicConfigurations'] = notification_conf['TopicConfigurations']
+
+ if 'QueueConfigurations' in notification_conf:
+ new_conf['QueueConfigurations'] = notification_conf['QueueConfigurations']
+
+ if lambda_parser:
+ new_conf['LambdaFunctionConfigurations'].append({
+ 'Id': 'Call Log Parser',
+ 'LambdaFunctionArn': lambda_function_arn,
+ 'Events': [S3_OBJECT_CREATED],
+ 'Filter': {'Key': {'FilterRules': [{'Name': 'suffix', 'Value': 'gz'}]}}
+ })
+
+ if athena_parser:
+ new_conf['LambdaFunctionConfigurations'].append({
+ 'Id': 'Call Athena Result Parser',
+ 'LambdaFunctionArn': lambda_function_arn,
+ 'Events': [S3_OBJECT_CREATED],
+ 'Filter': {'Key': {'FilterRules': [{'Name': 'prefix', 'Value': 'athena_results/'},
+ {'Name': 'suffix', 'Value': 'csv'}]}}
+ })
+
+ if lambda_log_partition_function_arn is not None:
+ new_conf['LambdaFunctionConfigurations'].append({
+ 'Id': 'Call s3 log partition function',
+ 'LambdaFunctionArn': lambda_log_partition_function_arn,
+ 'Events': [S3_OBJECT_CREATED],
+ 'Filter': {'Key': {
+ 'FilterRules': [{'Name': 'prefix', 'Value': bucket_prefix}, {'Name': 'suffix', 'Value': 'gz'}]}}
+ })
+
+ self.log.info("[add_s3_bucket_lambda_event] LambdaFunctionConfigurations:\n %s"
+ % (new_conf['LambdaFunctionConfigurations']))
+
+ self.s3.put_bucket_notification_configuration(bucket_name=bucket_name, new_conf=new_conf)
+ except Exception as error:
+ self.log.error(error)
+
+ self.log.info("[add_s3_bucket_lambda_event] End")
+
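+    # With both parsers enabled, the resulting bucket notification configuration has roughly this
+    # shape (ARNs are placeholders):
+    #
+    # {'LambdaFunctionConfigurations': [
+    #     {'Id': 'Call Log Parser',
+    #      'LambdaFunctionArn': '<log-parser-arn>',
+    #      'Events': ['s3:ObjectCreated:*'],
+    #      'Filter': {'Key': {'FilterRules': [{'Name': 'suffix', 'Value': 'gz'}]}}},
+    #     {'Id': 'Call Athena Result Parser',
+    #      'LambdaFunctionArn': '<log-parser-arn>',
+    #      'Events': ['s3:ObjectCreated:*'],
+    #      'Filter': {'Key': {'FilterRules': [{'Name': 'prefix', 'Value': 'athena_results/'},
+    #                                         {'Name': 'suffix', 'Value': 'csv'}]}}}]}
+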
+ def contains_old_app_access_resources(self, event: dict) -> bool:
+ resource_props = event.get('ResourceProperties', {})
+ old_resource_props = event.get('OldResourceProperties', {})
+ old_lambda_app_log_parser_function = old_resource_props.get('LogParser', None)
+ old_lambda_partition_s3_logs_function = old_resource_props.get('MoveS3LogsForPartition', None)
+ old_lambda_parser = old_resource_props['ScannersProbesLambdaLogParser'] == 'yes'
+ old_athena_parser = old_resource_props['ScannersProbesAthenaLogParser'] == 'yes'
+ lambda_log_parser_function = resource_props.get('LogParser', None)
+ lambda_partition_s3_logs_function = resource_props.get('MoveS3LogsForPartition', None)
+ lambda_parser = resource_props['ScannersProbesLambdaLogParser'] == 'yes'
+ athena_parser = resource_props['ScannersProbesAthenaLogParser'] == 'yes'
+
+        return (old_resource_props['AppAccessLogBucket'] != resource_props['AppAccessLogBucket'] or
+                old_lambda_app_log_parser_function != lambda_log_parser_function or
+                old_lambda_partition_s3_logs_function != lambda_partition_s3_logs_function or
+                old_lambda_parser != lambda_parser or
+                old_athena_parser != athena_parser or
+                ('AppAccessLogBucketPrefix' in resource_props
+                 and ('AppAccessLogBucketPrefix' not in old_resource_props
+                      or old_resource_props['AppAccessLogBucketPrefix']
+                      != resource_props['AppAccessLogBucketPrefix'])))
+
+ def waf_has_old_resources(self, event: dict) -> bool:
+ resource_props = event.get('ResourceProperties', {})
+ old_resource_props = event.get('OldResourceProperties', {})
+ lambda_log_parser_function = resource_props.get('LogParser', None)
+ lambda_parser = resource_props['HttpFloodLambdaLogParser'] == 'yes'
+ athena_parser = resource_props['HttpFloodAthenaLogParser'] == 'yes'
+ old_lambda_app_log_parser_function = old_resource_props.get('LogParser', None)
+ old_lambda_parser = old_resource_props['HttpFloodLambdaLogParser'] == 'yes'
+ old_athena_parser = old_resource_props['HttpFloodAthenaLogParser'] == 'yes'
+ old_waf_bucket = old_resource_props['WafLogBucket']
+ new_waf_bucket = resource_props['WafLogBucket']
+
+ return old_waf_bucket != new_waf_bucket or \
+ old_lambda_app_log_parser_function != lambda_log_parser_function or \
+ old_lambda_parser != lambda_parser or \
+ old_athena_parser != athena_parser
+
+ # ----------------------------------------------------------------------------------------------------------------------
+ # Enable access logging on the App access log bucket
+ # ----------------------------------------------------------------------------------------------------------------------
+ def put_s3_bucket_access_logging(self, bucket_name: str, access_logging_bucket_name: str) -> None:
+ self.log.info("[put_s3_bucket_access_logging] Start")
+
+ response = self.s3.get_bucket_logging(bucket_name)
+
+        # Enable access logging if it is not already enabled
+ if response.get('LoggingEnabled') is None:
+ self.s3.put_bucket_logging(
+ bucket_name=bucket_name,
+ bucket_logging_status={
+ 'LoggingEnabled': {
+ 'TargetBucket': access_logging_bucket_name,
+ 'TargetPrefix': 'AppAccess_Logs/'
+ }
+ }
+ )
+ self.log.info("[put_s3_bucket_access_logging] End")
+
+
+ # ======================================================================================================================
+ # Configure Access Log Bucket
+ # ======================================================================================================================
+ # ----------------------------------------------------------------------------------------------------------------------
+    # Create a bucket (if it does not exist) and configure an event to call the Log Parser lambda function when a new
+    # access log file is created (and stored in this S3 bucket).
+    #
+    # This function can raise an exception if:
+    # 01. An empty bucket name is used
+    # 02. The bucket already exists and was created in an account that you can't access
+    # 03. The bucket already exists and was created in a different region.
+    #     You can't trigger the log parser lambda function from another region.
+    #
+    # All of those requirements are pre-verified by the helper function.
+ # ----------------------------------------------------------------------------------------------------------------------
+ def configure_s3_bucket(self, event: dict) -> None:
+ self.log.info("[configure_s3_bucket] Start")
+
+ region = event['ResourceProperties']['Region']
+ bucket_name = event['ResourceProperties']['AppAccessLogBucket']
+ access_logging_bucket_name = event.get('ResourceProperties', {}).get('AccessLoggingBucket', None)
+
+ if bucket_name.strip() == "":
+ raise EMPTY_BUCKET_NAME_EXCEPTION
+
+ # ------------------------------------------------------------------------------------------------------------------
+ # Create the S3 bucket (if not exist)
+ # ------------------------------------------------------------------------------------------------------------------
+ try:
+ self.s3.head_bucket(bucket_name=bucket_name)
+
+ # Enable access logging if needed
+ if access_logging_bucket_name is not None:
+ self.put_s3_bucket_access_logging(bucket_name, access_logging_bucket_name)
+ except botocore.exceptions.ClientError as e:
+ # If a client error is thrown, then check that it was a 404 error.
+ # If it was a 404 error, then the bucket does not exist.
+ error_code = int(e.response['Error']['Code'])
+ if error_code == 404:
+ self.create_bucket(bucket_name, region, access_logging_bucket_name)
+
+ self.log.info("[configure_s3_bucket] End")
+
+
+ def create_bucket(self, bucket_name: str, region: str, access_logging_bucket_name: str):
+ self.log.info("[configure_s3_bucket]: %s doesn't exist. Create bucket." % bucket_name)
+
+ self.s3.create_bucket(bucket_name, 'private', region)
+
+        # Wait for the newly created S3 bucket to exist
+ self.s3.wait_bucket(bucket_name=bucket_name, waiter_name='bucket_exists')
+
+ # Enable server side encryption on the S3 bucket
+ self.s3.put_bucket_encryption(
+ bucket_name=bucket_name,
+ server_side_encryption_conf={
+ 'Rules': [
+ {
+ 'ApplyServerSideEncryptionByDefault': {
+ 'SSEAlgorithm': 'AES256'
+ }
+ },
+ ]
+ }
+ )
+
+ # block public access
+ self.s3.put_public_access_block(
+ bucket_name=bucket_name,
+ public_access_block_conf={
+ 'BlockPublicAcls': True,
+ 'IgnorePublicAcls': True,
+ 'BlockPublicPolicy': True,
+ 'RestrictPublicBuckets': True
+ }
+ )
+
+ # Enable access logging
+ self.put_s3_bucket_access_logging(bucket_name, access_logging_bucket_name)
+
+ def get_params_bucket_lambda_delete_event(self, event: dict) -> dict:
+ params = {}
+ resource_props = event.get('ResourceProperties', {})
+ params['bucket_name'] = resource_props["WafLogBucket"]
+ params['lambda_function_arn'] = resource_props.get('LogParser', None)
+ params['lambda_log_partition_function_arn'] = None
+ return params
+
+ def get_params_bucket_lambda_update_event(self, event: dict) -> dict:
+ params = {}
+ old_resource_props = event.get('OldResourceProperties', {})
+ params['bucket_name'] = old_resource_props["WafLogBucket"]
+ params['lambda_function_arn'] = old_resource_props.get('LogParser', None)
+ params['lambda_log_partition_function_arn'] = None
+ return params
+
+ def get_params_app_access_delete_event(self, event: dict) -> dict:
+ params = {}
+ resource_props = event.get('ResourceProperties', {})
+ params['bucket_name'] = resource_props["AppAccessLogBucket"]
+ params['lambda_function_arn'] = resource_props.get('LogParser', None)
+ params['lambda_log_partition_function_arn'] = resource_props.get('MoveS3LogsForPartition', None)
+ return params
+
+ def get_params_app_access_update_event(self, event: dict) -> dict:
+ params = {}
+ old_resource_props = event.get('OldResourceProperties', {})
+ params['bucket_name'] = old_resource_props["AppAccessLogBucket"]
+ params['lambda_function_arn'] = old_resource_props.get('LogParser', None)
+ params['lambda_log_partition_function_arn'] = old_resource_props.get('MoveS3LogsForPartition', None)
+ return params
+
+
+ # ----------------------------------------------------------------------------------------------------------------------
+ # Clean access log bucket event
+ # ----------------------------------------------------------------------------------------------------------------------
+ def remove_s3_bucket_lambda_event(self, bucket_name: str, lambda_function_arn: str, lambda_log_partition_function_arn: str) -> None:
+ if not lambda_function_arn:
+ return
+
+ self.log.info("[remove_s3_bucket_lambda_event] Start")
+
+ try:
+ new_conf = {}
+ notification_conf = self.s3.get_bucket_notification_configuration(bucket_name)
+
+ self.log.info("[remove_s3_bucket_lambda_event]notification_conf:\n {notification_conf}")
+
+ if 'TopicConfigurations' in notification_conf:
+ new_conf['TopicConfigurations'] = notification_conf['TopicConfigurations']
+ if 'QueueConfigurations' in notification_conf:
+ new_conf['QueueConfigurations'] = notification_conf['QueueConfigurations']
+
+ if 'LambdaFunctionConfigurations' in notification_conf:
+ new_conf['LambdaFunctionConfigurations'] = []
+ self.update_lambda_config(
+ notification_conf,
+ new_conf,
+ lambda_function_arn,
+ lambda_log_partition_function_arn
+ )
+
+ self.log.info(f"[remove_s3_bucket_lambda_event]new_conf:\n {new_conf}")
+
+ self.s3.put_bucket_notification_configuration(bucket_name, new_conf)
+
+ except Exception as error:
+            self.log.error(
+                "Failed to remove S3 bucket lambda event. Check that the bucket still exists, that you own it, and that it has a proper access policy.")
+ self.log.error(str(error))
+
+ self.log.info("[remove_s3_bucket_lambda_event] End")
+
+
+ def update_lambda_config(self, notification_conf: dict, new_conf: dict, lambda_function_arn: str, lambda_log_partition_function_arn: str) -> None:
+ for lfc in notification_conf['LambdaFunctionConfigurations']:
+ if lfc['LambdaFunctionArn'] in {lambda_function_arn, lambda_log_partition_function_arn}:
+ self.log.info("[remove_s3_bucket_lambda_event]%s match found, continue." %lfc['LambdaFunctionArn'])
+ else:
+ new_conf['LambdaFunctionConfigurations'].append(lfc)
+ self.log.info("[remove_s3_bucket_lambda_event]lfc appended: %s" %lfc)
+
+
+ # ======================================================================================================================
+ # Configure AWS WAF Logs
+ # ======================================================================================================================
+ def put_logging_configuration(self, event: dict) -> None:
+ self.log.debug("[waflib:put_logging_configuration] Start")
+
+ self.waflib.put_logging_configuration(
+ log=self.log,
+ web_acl_arn=event['ResourceProperties']['WAFWebACLArn'],
+ delivery_stream_arn=event['ResourceProperties']['DeliveryStreamArn'])
+
+ self.log.debug("[waflib:put_logging_configuration] End")
+
+
+ def delete_logging_configuration(self, event: dict) -> None:
+ self.log.debug("[waflib:delete_logging_configuration] Start")
+
+ self.waflib.delete_logging_configuration(
+ log=self.log,
+ web_acl_arn=event['ResourceProperties']['WAFWebACLArn'])
+
+ self.log.debug("[waflib:delete_logging_configuration] End")
+
+
+    def update_app_log_parser_conf(self, default_conf: dict, app_access_log_bucket: str, remote_file: str) -> None:
+ try:
+ remote_conf = self.s3.read_json_config_file_from_s3(app_access_log_bucket, remote_file)
+
+ if 'general' in remote_conf and 'errorCodes' in remote_conf['general']:
+ default_conf['general']['errorCodes'] = remote_conf['general']['errorCodes']
+
+ if 'uriList' in remote_conf:
+ default_conf['uriList'] = remote_conf['uriList']
+
+ except Exception as e:
+ self.log.debug("[generate_app_log_parser_conf_file] \tFailed to merge existing conf file data.")
+ self.log.debug(e)
+
+
+ # ======================================================================================================================
+ # Generate Log Parser Config File
+ # ======================================================================================================================
+ def generate_app_log_parser_conf_file(self, event: dict, overwrite: bool) -> None:
+ stack_name = event['ResourceProperties']['StackName']
+ error_threshold = int(event['ResourceProperties']['ErrorThreshold'])
+ block_period = int(event['ResourceProperties']['WAFBlockPeriod'])
+ app_access_log_bucket = event['ResourceProperties']['AppAccessLogBucket']
+
+ self.log.debug("[generate_app_log_parser_conf_file] Start")
+
+ local_file = '/tmp/' + stack_name + '-app_log_conf_LOCAL.json'
+ remote_file = stack_name + '-app_log_conf.json'
+ default_conf = {
+ 'general': {
+ 'errorThreshold': error_threshold,
+ 'blockPeriod': block_period,
+ 'errorCodes': ['400', '401', '403', '404', '405']
+ },
+ 'uriList': {
+ }
+ }
+
+ if not overwrite:
+ self.update_app_log_parser_conf(default_conf, app_access_log_bucket, remote_file)
+
+ with open(local_file, 'w') as outfile:
+ json.dump(default_conf, outfile)
+
+ self.s3.upload_file_to_s3(local_file, app_access_log_bucket, remote_file, extra_args={'ContentType': "application/json"})
+
+ self.log.debug("[generate_app_log_parser_conf_file] End")
+
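+    # The uploaded conf file is plain JSON; with illustrative threshold values it looks like:
+    # {"general": {"errorThreshold": 50, "blockPeriod": 240,
+    #              "errorCodes": ["400", "401", "403", "404", "405"]},
+    #  "uriList": {}}
+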
+
+    def delete_ip_sets(self, event: dict) -> None:
+        resource_props = event['ResourceProperties']
+        scope = os.getenv('SCOPE')
+        # Delete each IP set created by the stack, preserving the original IPv4-then-IPv6 order
+        ip_set_keys = [
+            'WAFWhitelistSetIPV4', 'WAFBlacklistSetIPV4', 'WAFHttpFloodSetIPV4',
+            'WAFScannersProbesSetIPV4', 'WAFReputationListsSetIPV4', 'WAFBadBotSetIPV4',
+            'WAFWhitelistSetIPV6', 'WAFBlacklistSetIPV6', 'WAFHttpFloodSetIPV6',
+            'WAFScannersProbesSetIPV6', 'WAFReputationListsSetIPV6', 'WAFBadBotSetIPV6'
+        ]
+        for ip_set_key in ip_set_keys:
+            if ip_set_key in resource_props:
+                self.waflib.delete_ip_set(
+                    self.log,
+                    scope,
+                    resource_props[ip_set_key + 'Name'],
+                    resource_props[ip_set_key])
+
+
+    def update_waf_log_parser_conf(self, default_conf: dict, waf_access_log_bucket: str, remote_file: str) -> None:
+        try:
+            remote_conf = self.s3.read_json_config_file_from_s3(waf_access_log_bucket, remote_file)
+
+ if 'general' in remote_conf and 'ignoredSufixes' in remote_conf['general']:
+ default_conf['general']['ignoredSufixes'] = remote_conf['general']['ignoredSufixes']
+
+ if 'uriList' in remote_conf:
+ default_conf['uriList'] = remote_conf['uriList']
+
+ except Exception as e:
+ self.log.debug("[generate_waf_log_parser_conf_file] \tFailed to merge existing conf file data.")
+ self.log.debug(e)
+
+
+ def generate_waf_log_parser_conf_file(self, event: dict, overwrite: bool) -> None:
+ self.log.debug("[generate_waf_log_parser_conf_file] Start")
+
+ resource_props = event['ResourceProperties']
+ stack_name = resource_props['StackName']
+ request_threshold = int(resource_props['RequestThreshold'])
+ block_period = int(resource_props['WAFBlockPeriod'])
+ waf_access_log_bucket = resource_props['WafAccessLogBucket']
+
+ local_file = '/tmp/' + stack_name + '-waf_log_conf_LOCAL.json'
+ remote_file = stack_name + '-waf_log_conf.json'
+ default_conf = {
+ 'general': {
+ 'requestThreshold': request_threshold,
+ 'blockPeriod': block_period,
+ 'ignoredSufixes': []
+ },
+ 'uriList': {
+ }
+ }
+
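+        # Unless an overwrite was requested, fold the settings already stored
+        # in the remote conf file into the defaults before uploading.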
+ if not overwrite:
+            self.update_waf_log_parser_conf(default_conf, waf_access_log_bucket, remote_file)
+
+ with open(local_file, 'w') as outfile:
+ json.dump(default_conf, outfile)
+
+ self.s3.upload_file_to_s3(local_file, waf_access_log_bucket, remote_file, extra_args={'ContentType': "application/json"})
+
+ self.log.debug("[generate_waf_log_parser_conf_file] End")
+
+ # ======================================================================================================================
+ # Add Athena Partitions
+ # ======================================================================================================================
+ def add_athena_partitions(self, event: dict) -> None:
+ self.log.info("[add_athena_partitions] Start")
+ resource_props = event['ResourceProperties']
+
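+        # Invoke the Add Athena Partitions Lambda synchronously (boto3's
+        # default RequestResponse mode). The property holds the function ARN,
+        # so rsplit keeps only the trailing function name.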
+ lambda_client = create_client('lambda')
+        payload = json.dumps({
+            "resourceType": resource_props['ResourceType'],
+            "glueAccessLogsDatabase": resource_props['GlueAccessLogsDatabase'],
+            "accessLogBucket": resource_props['AppAccessLogBucket'],
+            "glueAppAccessLogsTable": resource_props['GlueAppAccessLogsTable'],
+            "glueWafAccessLogsTable": resource_props['GlueWafAccessLogsTable'],
+            "wafLogBucket": resource_props['WafLogBucket'],
+            "athenaWorkGroup": resource_props['AthenaWorkGroup']
+        })
+        response = lambda_client.invoke(
+            FunctionName=resource_props['AddAthenaPartitionsLambda'].rsplit(":", 1)[-1],
+            Payload=payload
+        )
+ self.log.info("[add_athena_partitions] Lambda invocation response:\n%s" % response)
+ self.log.info("[add_athena_partitions] End")
+
+
+ # ======================================================================================================================
+ # Auxiliary Functions
+ # ======================================================================================================================
+ def send_anonymous_usage_data(self, action_type, resource_properties):
+ try:
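+            # Metrics are strictly opt-in: bail out unless the template
+            # explicitly enabled anonymous usage data.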
+            if resource_properties.get('SendAnonymousUsageData', '').lower() != 'yes':
+                return
+ self.log.info("[send_anonymous_usage_data] Start")
+
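+            # The properties below are indexed directly; a missing key raises
+            # KeyError, which the outer except logs and swallows so metrics
+            # failures never fail the deployment.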
+ usage_data = {
+ "version": resource_properties['Version'],
+ "data_type": "custom_resource",
+ "region": resource_properties['Region'],
+ "action": action_type,
+ "sql_injection_protection": resource_properties['ActivateSqlInjectionProtectionParam'],
+ "xss_scripting_protection": resource_properties['ActivateCrossSiteScriptingProtectionParam'],
+ "http_flood_protection": resource_properties['ActivateHttpFloodProtectionParam'],
+ "scanners_probes_protection": resource_properties['ActivateScannersProbesProtectionParam'],
+ "reputation_lists_protection": resource_properties['ActivateReputationListsProtectionParam'],
+ "bad_bot_protection": resource_properties['ActivateBadBotProtectionParam'],
+ "existing_api_gateway_badbot_cw_role": resource_properties['ApiGatewayBadBotCWRoleParam'],
+ "request_threshold": resource_properties['RequestThreshold'],
+ "error_threshold": resource_properties['ErrorThreshold'],
+ "waf_block_period": resource_properties['WAFBlockPeriod'],
+ "aws_managed_rules": resource_properties['ActivateAWSManagedRulesParam'],
+ "amr_admin_protection": resource_properties['ActivateAWSManagedAPParam'],
+ "amr_known_bad_input": resource_properties['ActivateAWSManagedKBIParam'],
+ "amr_ip_reputation": resource_properties['ActivateAWSManagedIPRParam'],
+ "amr_anonymous_ip": resource_properties['ActivateAWSManagedAIPParam'],
+ "amr_sql": resource_properties['ActivateAWSManagedSQLParam'],
+ "amr_linux": resource_properties['ActivateAWSManagedLinuxParam'],
+ "amr_posix": resource_properties['ActivateAWSManagedPOSIXParam'],
+ "amr_windows": resource_properties['ActivateAWSManagedWindowsParam'],
+ "amr_php": resource_properties['ActivateAWSManagedPHPParam'],
+ "amr_wordpress": resource_properties['ActivateAWSManagedWPParam'],
+ "keep_original_s3_data": resource_properties['KeepDataInOriginalS3Location'],
+ "allowed_ip_retention_period_minute": resource_properties['IPRetentionPeriodAllowedParam'],
+ "denied_ip_retention_period_minute": resource_properties['IPRetentionPeriodDeniedParam'],
+ "sns_email_notification": resource_properties['SNSEmailParam'],
+ "user_defined_app_access_log_bucket_prefix":
+ resource_properties['UserDefinedAppAccessLogBucketPrefixParam'],
+ "app_access_log_bucket_logging_enabled_by_user":
+ resource_properties['AppAccessLogBucketLoggingStatusParam'],
+ "request_threshold_by_country":
+ resource_properties['RequestThresholdByCountryParam'],
+ "http_flood_athena_query_group_by":
+ resource_properties['HTTPFloodAthenaQueryGroupByParam'],
+ "athena_query_run_time_schedule":
+ resource_properties['AthenaQueryRunTimeScheduleParam'],
+                "provisioner": resource_properties.get('Provisioner', 'cfn')
+ }
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[send_anonymous_usage_data] Send Data")
+ # --------------------------------------------------------------------------------------------------------------
+ response = send_metrics(data=usage_data)
+ response_code = response.status_code
+ self.log.info('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
+ self.log.info("[send_anonymous_usage_data] End")
+
+ except Exception as error:
+ self.log.debug("[send_anonymous_usage_data] Failed to Send Data")
+ self.log.debug(str(error))
diff --git a/source/custom_resource/test/__init__.py b/source/custom_resource/test/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/custom_resource/test/conftest.py b/source/custom_resource/test/conftest.py
new file mode 100644
index 00000000..639184b5
--- /dev/null
+++ b/source/custom_resource/test/conftest.py
@@ -0,0 +1,417 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import pytest
+import boto3
+from os import environ
+from moto import (
+ mock_s3,
+ mock_logs,
+ mock_wafv2
+)
+
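+# Minimal stand-in for the Lambda context object; only the attributes the
+# handler reads (function ARN and CloudWatch log names) are provided.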
+class Context:
+ def __init__(self, invoked_function_arn, log_group_name, log_stream_name):
+ self.invoked_function_arn = invoked_function_arn
+ self.log_group_name = log_group_name
+ self.log_stream_name = log_stream_name
+
+@pytest.fixture(scope='module', autouse=True)
+def aws_credentials():
+ """Mocked AWS Credentials for moto"""
+ environ['AWS_ACCESS_KEY_ID'] = 'testing'
+ environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
+ environ['AWS_SECURITY_TOKEN'] = 'testing'
+ environ['AWS_SESSION_TOKEN'] = 'testing'
+ environ['AWS_DEFAULT_REGION'] = 'us-east-1'
+ environ['AWS_REGION'] = 'us-east-1'
+
+@pytest.fixture(scope="session")
+def example_context():
+ return Context(':::invoked_function_arn', 'log_group_name', 'log_stream_name')
+
+@pytest.fixture(scope="session")
+def s3_client():
+ with mock_s3():
+ s3 = boto3.client('s3')
+ yield s3
+
+@pytest.fixture(scope="session")
+def s3_bucket(s3_client):
+ my_bucket = 'bucket_name'
+ s3_client.create_bucket(Bucket=my_bucket)
+ return my_bucket
+
+@pytest.fixture(scope="session")
+def cloudwatch_client():
+ with mock_logs():
+ cw_client = boto3.client('logs')
+ yield cw_client
+
+@pytest.fixture(scope="session")
+def wafv2_client():
+ with mock_wafv2():
+ wafv2_client = boto3.client('wafv2')
+ yield wafv2_client
+
+@pytest.fixture(scope="session")
+def configure_cloud_watch_group_retention_event():
+ return {
+ 'LogicalResourceId': 'SetCloudWatchLogGroupRetention',
+ 'RequestId': 'ea233805-3fcc-4cd3-b27b-72ee1de37fd4',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'AddAthenaPartitionsLambdaName': 'wafohio-AddAthenaPartitions-ECWYudO8kRMS',
+ 'BadBotParserLambdaName': 'wafohio-BadBotParser-rperXcaWortz',
+ 'CustomResourceLambdaName': 'wafohio-CustomResource-WnfNLnBqtXPF',
+ 'CustomTimerLambdaName': 'wafohio-WebACLStack-1218MNWFWK1BN-CustomTimer-FTgDc0Lar0fj',
+ 'HelperLambdaName': 'wafohio-Helper-QC0crJu0nSgs',
+ 'LogGroupRetention': '150',
+ 'LogParserLambdaName': 'wafohio-LogParser-jjx2HJSF27ji',
+ 'MoveS3LogsForPartitionLambdaName': 'wafohio-MoveS3LogsForPartition-EkJByFiC8sHw',
+ 'RemoveExpiredIPLambdaName': 'wafohio-RemoveExpiredIP-oZSLjeCA8SKF',
+ 'ReputationListsParserLambdaName': 'wafohio-ReputationListsParser-uCaQ9xUSb3O5',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'SetIPRetentionLambdaName': 'wafohio-SetIPRetention-AhUUa7ZMwuIN',
+ 'SolutionVersion': 'v4.0-feature-wiq_integrationtestingfix',
+ 'StackName': 'wafohio'
+ },
+ 'ResourceType': 'Custom::SetCloudWatchLogGroupRetention',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+
+@pytest.fixture(scope="session")
+def configure_app_access_log_bucket_create_event():
+ return {
+ 'LogicalResourceId': 'ConfigureAppAccessLogBucket',
+ 'RequestId': 'ed758acd-e94b-4f2b-9a3a-935efb325f91',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'AccessLoggingBucket': 'bucket_name',
+ 'AppAccessLogBucket': 'wiq424231042-wafohio-wiq424231042',
+ 'AppAccessLogBucketPrefix': 'AWSLogs/',
+ 'LogParser': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-LogParser-jjx2HJSF27ji',
+ 'MoveS3LogsForPartition': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-MoveS3LogsForPartition-EkJByFiC8sHw',
+ 'Region': 'us-east-2',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF'
+ },
+ 'ResourceType': 'Custom::ConfigureAppAccessLogBucket',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def add_athena_partitions_create_event():
+ return {
+ 'LogicalResourceId': 'CustomAddAthenaPartitions',
+ 'RequestId': 'e0b5586c-b42d-4e64-b637-8d3eb19b1ff5',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'AddAthenaPartitionsLambda': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-AddAthenaPartitions-ECWYudO8kRMS',
+ 'AppAccessLogBucket': 'wiq424231042-wafohio-wiq424231042',
+ 'AthenaWorkGroup': 'WAFAddPartitionAthenaQueryWorkGroup-b1af171d-e483-4fbc-a494-43492bfb214a',
+ 'GlueAccessLogsDatabase': 'wafohio_gon4pq',
+ 'GlueAppAccessLogsTable': 'app_access_logs',
+ 'GlueWafAccessLogsTable': 'waf_access_logs',
+ 'ResourceType': 'CustomResource',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'WafLogBucket': 'wafohio-waflogbucket-l1a9qllrsfv4'},
+ 'ResourceType': 'Custom::AddAthenaPartitions',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def generate_waf_log_parser_conf_create_event():
+ return {
+ 'LogicalResourceId': 'GenerateWafLogParserConfFile',
+ 'RequestId': '142546c5-25e1-48ca-b35f-51cb0f3c41f0',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'RequestThreshold': '100',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackName': 'wafmilan424319',
+ 'WAFBlockPeriod': '240',
+ 'WafAccessLogBucket': 'bucket_name'
+ },
+ 'ResourceType': 'Custom::GenerateWafLogParserConfFile',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafmilan424319/3736de70-e2ee-11ed-b571-0a54c0d659fa'
+ }
+
+@pytest.fixture(scope="session")
+def generate_waf_log_parser_conf_update_event():
+ return {
+ 'LogicalResourceId': 'GenerateWafLogParserConfFile',
+ 'RequestId': '142546c5-25e1-48ca-b35f-51cb0f3c41f0',
+ 'RequestType': 'Update',
+ 'ResourceProperties': {
+ 'RequestThreshold': '100',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackName': 'wafmilan424319',
+ 'WAFBlockPeriod': '240',
+ 'WafAccessLogBucket': 'bucket_name'
+ },
+ 'ResourceType': 'Custom::GenerateWafLogParserConfFile',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafmilan424319/3736de70-e2ee-11ed-b571-0a54c0d659fa'
+ }
+
+@pytest.fixture(scope="session")
+def generate_app_log_parser_conf_create_event():
+ return {
+ 'LogicalResourceId': 'GenerateAppLogParserConfFile',
+ 'RequestId': '68dde83a-9359-490e-8ddd-dfb513595519',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'bucket_name',
+ 'ErrorThreshold': '50',
+ 'ServiceToken': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackName': 'wafmilan424319',
+ 'WAFBlockPeriod': '240'
+ },
+ 'ResourceType': 'Custom::GenerateAppLogParserConfFile',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-eusouth1.s3.eu-south-1.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackId': 'arn:aws:cloudformation:eu-south-1:XXXXXXXXXXXX:stack/wafmilan424319/3736de70-e2ee-11ed-b571-0a54c0d659fa'
+ }
+
+@pytest.fixture(scope="session")
+def generate_app_log_parser_conf_update_event():
+ return {
+ 'LogicalResourceId': 'GenerateAppLogParserConfFile',
+ 'RequestId': '68dde83a-9359-490e-8ddd-dfb513595519',
+ 'RequestType': 'Update',
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'bucket_name',
+ 'ErrorThreshold': '50',
+ 'ServiceToken': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackName': 'wafmilan424319',
+ 'WAFBlockPeriod': '240'
+ },
+ 'ResourceType': 'Custom::GenerateAppLogParserConfFile',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-eusouth1.s3.eu-south-1.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan424319-CustomResource-oSlRnpIEvNrS',
+ 'StackId': 'arn:aws:cloudformation:eu-south-1:XXXXXXXXXXXX:stack/wafmilan424319/3736de70-e2ee-11ed-b571-0a54c0d659fa'
+ }
+
+@pytest.fixture(scope="session")
+def configure_aws_waf_logs_create_event():
+ return {
+ 'LogicalResourceId': 'ConfigureAWSWAFLogs',
+ 'RequestId': '25d75d10-c5fa-48da-a79a-d827bfe0a465',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'DeliveryStreamArn': 'arn:aws:firehose:us-east-2:XXXXXXXXXXXX:deliverystream/aws-waf-logs-wafohio_xToOQk',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'WAFWebACLArn': 'arn:aws:wafv2:us-east-2:XXXXXXXXXXXX:regional/webacl/wafohio/c2e77a1b-6bb3-4d9d-86f9-0bfd9b6fdcaf'
+ },
+ 'ResourceType': 'Custom::ConfigureAWSWAFLogs',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def configure_aws_waf_logs_update_event():
+ return {
+ 'LogicalResourceId': 'ConfigureAWSWAFLogs',
+ 'RequestId': '25d75d10-c5fa-48da-a79a-d827bfe0a465',
+ 'RequestType': 'Update',
+ 'ResourceProperties': {
+ 'DeliveryStreamArn': 'arn:aws:firehose:us-east-2:XXXXXXXXXXXX:deliverystream/aws-waf-logs-wafohio_xToOQk',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'WAFWebACLArn': 'arn:aws:wafv2:us-east-2:XXXXXXXXXXXX:regional/webacl/wafohio/c2e77a1b-6bb3-4d9d-86f9-0bfd9b6fdcaf'
+ },
+ 'ResourceType': 'Custom::ConfigureAWSWAFLogs',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def configure_aws_waf_logs_delete_event():
+ return {
+ 'LogicalResourceId': 'ConfigureAWSWAFLogs',
+ 'RequestId': '25d75d10-c5fa-48da-a79a-d827bfe0a465',
+ 'RequestType': 'Delete',
+ 'ResourceProperties': {
+ 'DeliveryStreamArn': 'arn:aws:firehose:us-east-2:XXXXXXXXXXXX:deliverystream/aws-waf-logs-wafohio_xToOQk',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'WAFWebACLArn': 'arn:aws:wafv2:us-east-2:XXXXXXXXXXXX:regional/webacl/wafohio/c2e77a1b-6bb3-4d9d-86f9-0bfd9b6fdcaf'
+ },
+ 'ResourceType': 'Custom::ConfigureAWSWAFLogs',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def configure_web_acl_delete():
+ environ['SCOPE'] = 'REGIONAL'
+ return {
+ 'LogicalResourceId': 'ConfigureWebAcl',
+ 'RequestId': 'c11604fb-09d1-4d33-a893-ce58369b24dd',
+        'RequestType': 'Delete',
+ 'ResourceProperties': {
+ 'ActivateAWSManagedAIPParam': 'no',
+ 'ActivateAWSManagedAPParam': 'no',
+ 'ActivateAWSManagedIPRParam': 'yes',
+ 'ActivateAWSManagedKBIParam': 'no',
+ 'ActivateAWSManagedLinuxParam': 'no',
+ 'ActivateAWSManagedPHPParam': 'no',
+ 'ActivateAWSManagedPOSIXParam': 'no',
+ 'ActivateAWSManagedRulesParam': 'no',
+ 'ActivateAWSManagedSQLParam': 'no',
+ 'ActivateAWSManagedWPParam': 'no',
+ 'ActivateAWSManagedWindowsParam': 'no',
+ 'ActivateBadBotProtectionParam': 'yes',
+ 'ActivateCrossSiteScriptingProtectionParam': 'yes',
+            'ActivateHttpFloodProtectionParam': 'yes - Amazon Athena log parser',
+            'ActivateReputationListsProtectionParam': 'yes',
+            'ActivateScannersProbesProtectionParam': 'yes - Amazon Athena log parser',
+ 'ActivateSqlInjectionProtectionParam': 'yes',
+ 'ApiGatewayBadBotCWRoleParam': 'no',
+ 'AppAccessLogBucketLoggingStatusParam': 'yes',
+ 'AthenaQueryRunTimeScheduleParam': '5',
+ 'ErrorThreshold': '50',
+ 'HTTPFloodAthenaQueryGroupByParam': 'None',
+ 'IPRetentionPeriodAllowedParam': '15',
+ 'IPRetentionPeriodDeniedParam': '15',
+ 'KeepDataInOriginalS3Location': 'No',
+ 'Provisioner': 'cfn',
+ 'Region': 'us-east-2',
+ 'RequestThreshold': '100',
+ 'RequestThresholdByCountryParam': 'no',
+ 'SNSEmailParam': 'no',
+ 'SendAnonymousUsageData': 'Yes',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'UUID': 'b1af171d-e483-4fbc-a494-43492bfb214a',
+ 'UserDefinedAppAccessLogBucketPrefixParam': 'no',
+ 'Version': 'v4.0-feature-wiq_integrationtestingfix',
+ 'WAFBadBotSetIPV4': '3fa70158-6584-469a-8ec1-eeb15963752b',
+ 'WAFBadBotSetIPV4Name': 'wafohioIPBadBotSetIPV4',
+ 'WAFBadBotSetIPV6': '165dfaa5-edf8-4ad2-8abe-659495875371',
+ 'WAFBadBotSetIPV6Name': 'wafohioIPBadBotSetIPV6',
+ 'WAFBlacklistSetIPV4': '5f0c3b63-87d0-481e-869d-afc8d80b1f9b',
+ 'WAFBlacklistSetIPV4Name': 'wafohioBlacklistSetIPV4',
+ 'WAFBlacklistSetIPV6': 'aa8d3cb4-d7bc-4ac0-860d-a5214270ebc9',
+ 'WAFBlacklistSetIPV6Name': 'wafohioBlacklistSetIPV6',
+ 'WAFBlockPeriod': '240',
+ 'WAFHttpFloodSetIPV4': '0ce433f3-1d4d-4ab8-a363-312bfeeceab7',
+ 'WAFHttpFloodSetIPV4Name': 'wafohioHTTPFloodSetIPV4',
+ 'WAFHttpFloodSetIPV6': 'bc21f6aa-5d0a-4153-9a8b-b8d78e038ba7',
+ 'WAFHttpFloodSetIPV6Name': 'wafohioHTTPFloodSetIPV6',
+ 'WAFReputationListsSetIPV4': '81039705-5dcd-4c50-bcf1-de4b37e3019d',
+ 'WAFReputationListsSetIPV4Name': 'wafohioIPReputationListsSetIPV4',
+ 'WAFReputationListsSetIPV6': '8238e089-b15e-432d-9983-c8830ffe3cb1',
+ 'WAFReputationListsSetIPV6Name': 'wafohioIPReputationListsSetIPV6',
+ 'WAFScannersProbesSetIPV4': '690ebdd5-d5f3-4755-a3cd-005dddd8b114',
+ 'WAFScannersProbesSetIPV4Name': 'wafohioScannersProbesSetIPV4',
+ 'WAFScannersProbesSetIPV6': 'd304df05-8a0e-46ae-9b43-e4f0360643c3',
+ 'WAFScannersProbesSetIPV6Name': 'wafohioScannersProbesSetIPV6',
+ 'WAFWebACL': 'wafohio|c2e77a1b-6bb3-4d9d-86f9-0bfd9b6fdcaf|REGIONAL',
+ 'WAFWhitelistSetIPV4': '2c0ff79d-f314-40fa-8dab-cd3d0715d478',
+ 'WAFWhitelistSetIPV4Name': 'wafohioWhitelistSetIPV4',
+ 'WAFWhitelistSetIPV6': '6e7dcc41-e6e4-4b44-a2b4-3cfc4278ae52',
+ 'WAFWhitelistSetIPV6Name': 'wafohioWhitelistSetIPV6'},
+ 'ResourceType': 'Custom::ConfigureWebAcl',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def configure_waf_log_bucket_create_event():
+ return {
+ 'LogicalResourceId': 'ConfigureWafLogBucket',
+ 'RequestId': '8a93cdcf-bf5f-4a81-89fe-0e7d2e1d4c50',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'HttpFloodAthenaLogParser': 'yes',
+ 'HttpFloodLambdaLogParser': 'no',
+ 'LogParser': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-LogParser-jjx2HJSF27ji',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'WafLogBucket': 'wafohio-waflogbucket-l1a9qllrsfv4'
+ },
+ 'ResourceType': 'Custom::ConfigureWafLogBucket',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def configure_waf_log_bucket_delete_event():
+ return {
+ 'LogicalResourceId': 'ConfigureWafLogBucket',
+ 'PhysicalResourceId': 'ConfigureWafLogBucket',
+ 'RequestId': '5519325d-9beb-4c68-9ce9-825c8af6e63b',
+ 'RequestType': 'Delete',
+ 'ResourceProperties': {
+ 'HttpFloodAthenaLogParser': 'yes',
+ 'HttpFloodLambdaLogParser': 'no',
+ 'LogParser': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-LogParser-jjx2HJSF27ji',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'WafLogBucket': 'wafohio-waflogbucket-l1a9qllrsfv4'},
+ 'ResourceType': 'Custom::ConfigureWafLogBucket',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
+
+@pytest.fixture(scope="session")
+def successful_response():
+ return '{"StatusCode": "200", "Body": {"message": "success"}}'
+
+@pytest.fixture(scope="session")
+def app_access_log_bucket_create_event_error_response():
+ return '{"statusCode": "500", "body": {"message": "An error occurred (InvalidTargetBucketForLogging) when calling the PutBucketLogging operation: You must give the log-delivery group WRITE and READ_ACP permissions to the target bucket"}}'
+
+@pytest.fixture(scope="session")
+def configure_app_access_log_bucket_delete_event():
+ return {
+ 'LogicalResourceId': 'ConfigureAppAccessLogBucket',
+ 'PhysicalResourceId': 'ConfigureAppAccessLogBucket',
+ 'RequestId': '5bd57115-37d7-448e-8e24-863bd66821f9',
+ 'RequestType': 'Delete',
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'wiq424231042-wafohio-wiq424231042',
+ 'AppAccessLogBucketPrefix': 'AWSLogs/',
+ 'LogParser': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-LogParser-jjx2HJSF27ji',
+ 'MoveS3LogsForPartition': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-MoveS3LogsForPartition-EkJByFiC8sHw',
+ 'Region': 'us-east-2',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF'
+ },
+ 'ResourceType': 'Custom::ConfigureAppAccessLogBucket',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
\ No newline at end of file
diff --git a/source/custom_resource/test/test_custom_resource.py b/source/custom_resource/test/test_custom_resource.py
new file mode 100644
index 00000000..c0639167
--- /dev/null
+++ b/source/custom_resource/test/test_custom_resource.py
@@ -0,0 +1,79 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+from custom_resource.custom_resource import lambda_handler
+
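+# Each test drives lambda_handler end to end against the moto-mocked AWS
+# clients provided by conftest.py and compares the serialized handler result.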
+def test_set_cloud_watch_group_retention(configure_cloud_watch_group_retention_event, example_context, cloudwatch_client, successful_response):
+ result = lambda_handler(configure_cloud_watch_group_retention_event,example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_generate_waf_log_parser_conf_create_event(generate_waf_log_parser_conf_create_event, example_context, wafv2_client, s3_bucket, s3_client, successful_response):
+ result = lambda_handler(generate_waf_log_parser_conf_create_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_generate_waf_log_parser_conf_update_event(generate_waf_log_parser_conf_update_event, example_context, wafv2_client, s3_bucket, s3_client, successful_response):
+ result = lambda_handler(generate_waf_log_parser_conf_update_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_generate_app_log_parser_conf_create_event(generate_app_log_parser_conf_create_event, example_context, wafv2_client, s3_bucket, s3_client, successful_response):
+ result = lambda_handler(generate_app_log_parser_conf_create_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_generate_app_log_parser_conf_update_event(generate_app_log_parser_conf_update_event, example_context, wafv2_client, s3_bucket, s3_client, successful_response):
+ result = lambda_handler(generate_app_log_parser_conf_update_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_configure_aws_waf_logs_create_event(configure_aws_waf_logs_create_event, example_context, wafv2_client, successful_response):
+ result = lambda_handler(configure_aws_waf_logs_create_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_configure_aws_waf_logs_update_event(configure_aws_waf_logs_update_event, example_context, wafv2_client, successful_response):
+ result = lambda_handler(configure_aws_waf_logs_update_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_configure_aws_waf_logs_delete_event(configure_aws_waf_logs_delete_event, example_context, wafv2_client, successful_response):
+ result = lambda_handler(configure_aws_waf_logs_delete_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_configure_web_acl_delete(configure_web_acl_delete, example_context, successful_response):
+ result = lambda_handler(configure_web_acl_delete, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_configure_waf_log_bucket_create_event(configure_waf_log_bucket_create_event, example_context, s3_bucket, s3_client, successful_response):
+ result = lambda_handler(configure_waf_log_bucket_create_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_configure_waf_log_bucket_delete_event(configure_waf_log_bucket_delete_event, example_context, s3_bucket, s3_client, successful_response):
+ result = lambda_handler(configure_waf_log_bucket_delete_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_configure_app_access_log_bucket_create_event(configure_app_access_log_bucket_create_event, example_context, s3_bucket, s3_client, app_access_log_bucket_create_event_error_response):
+ result = lambda_handler(configure_app_access_log_bucket_create_event, example_context)
+ expected = app_access_log_bucket_create_event_error_response
+ assert result == expected
+
+def test_configure_app_access_log_bucket_delete_event(configure_app_access_log_bucket_delete_event, example_context, s3_bucket, s3_client, successful_response):
+ result = lambda_handler(configure_app_access_log_bucket_delete_event, example_context)
+ expected = successful_response
+ assert result == expected
\ No newline at end of file
diff --git a/source/custom_resource/test/test_log_group_retention.py b/source/custom_resource/test/test_log_group_retention.py
new file mode 100644
index 00000000..d6cf159a
--- /dev/null
+++ b/source/custom_resource/test/test_log_group_retention.py
@@ -0,0 +1,76 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+from log_group_retention import LogGroupRetention
+import logging
+
+log_level = 'DEBUG'
+logging.getLogger().setLevel(log_level)
+log = logging.getLogger('test_log_group_retention')
+
+lgr = LogGroupRetention(log)
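+# The tests below pin the current behavior: stack names are truncated to 20
+# characters before building the /aws/lambda/<stack name> log-group prefix.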
+
+def test_truncate_stack_name_empty():
+ stack_name = ''
+ expected = ''
+ res = lgr.truncate_stack_name(stack_name)
+ assert res == expected
+
+
+def test_truncate_stack_name_short():
+ stack_name = 'undertwentychars'
+ expected = 'undertwentychars'
+ res = lgr.truncate_stack_name(stack_name)
+ assert res == expected
+
+
+def test_truncate_stack_name_long():
+    stack_name = 'thisisovertwentycharacters'
+ expected = 'thisisovertwentychar'
+ res = lgr.truncate_stack_name(stack_name)
+ assert res == expected
+
+
+def test_get_log_group_prefix():
+ stack_name = 'stackname'
+ expected = '/aws/lambda/stackname'
+ res = lgr.get_log_group_prefix(stack_name)
+ assert res == expected
+
+
+def test_get_lambda_names():
+    resource_props = {
+        'CustomResourceLambdaName': 'TESTCustomResourceLambdaName',
+        'MoveS3LogsForPartitionLambdaName': 'TESTMoveS3LogsForPartitionLambdaName',
+        'AddAthenaPartitionsLambdaName': 'TESTAddAthenaPartitionsLambdaName',
+        'SetIPRetentionLambdaName': 'TESTSetIPRetentionLambdaName',
+        'RemoveExpiredIPLambdaName': 'TESTRemoveExpiredIPLambdaName',
+        'ReputationListsParserLambdaName': 'TESTReputationListsParserLambdaName',
+        'BadBotParserLambdaName': 'TESTBadBotParserLambdaName',
+        'CustomTimerLambdaName': 'TESTCustomTimerLambdaName',
+        'RandomProp': 'TESTRandomProp'
+    }
+    expected = {
+        '/aws/lambda/TESTCustomResourceLambdaName',
+        '/aws/lambda/TESTMoveS3LogsForPartitionLambdaName',
+        '/aws/lambda/TESTAddAthenaPartitionsLambdaName',
+        '/aws/lambda/TESTSetIPRetentionLambdaName',
+        '/aws/lambda/TESTRemoveExpiredIPLambdaName',
+        '/aws/lambda/TESTReputationListsParserLambdaName',
+        '/aws/lambda/TESTBadBotParserLambdaName',
+        '/aws/lambda/TESTCustomTimerLambdaName'
+    }
+ res = lgr.get_lambda_names(resource_props)
+ assert res == expected
diff --git a/source/custom_resource/test/test_resource_manager.py b/source/custom_resource/test/test_resource_manager.py
new file mode 100644
index 00000000..b5b9e670
--- /dev/null
+++ b/source/custom_resource/test/test_resource_manager.py
@@ -0,0 +1,278 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import logging
+from resource_manager import ResourceManager
+
+log_level = 'DEBUG'
+logging.getLogger().setLevel(log_level)
+log = logging.getLogger('test_resource_manager')
+
+resource_manager = ResourceManager(log)
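+# These tests exercise ResourceManager's pure parameter-extraction helpers,
+# so no AWS mocks are required.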
+
+def test_get_params_waf_event():
+ event = {
+ 'ResourceProperties': {
+ 'WafLogBucket': 'WafLogBucket',
+ 'LogParser': 'LogParser',
+ 'HttpFloodLambdaLogParser': 'no',
+ 'HttpFloodAthenaLogParser': 'yes'
+ }
+ }
+ expected = {
+ 'bucket_name': 'WafLogBucket',
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': None,
+ 'lambda_parser': False,
+ 'athena_parser': True,
+ 'bucket_prefix': 'AWSLogs/'
+ }
+ res = resource_manager.get_params_waf_event(event)
+ assert expected == res
+
+def test_get_params_waf_event_full_event():
+ event = {
+ 'LogicalResourceId': 'ConfigureWafLogBucket',
+ 'RequestId': 'XXXXXXXXXXXX',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'HttpFloodAthenaLogParser': 'yes',
+ 'HttpFloodLambdaLogParser': 'no',
+ 'LogParser': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan419115-LogParser-zouewUuDjyQU',
+ 'ServiceToken': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan419115-CustomResource-VPiXt5B9MPb3',
+ 'WafLogBucket': 'wafmilan419115-waflogbucket-9qpon138lt2l'
+ },
+ 'ResourceType': 'Custom::ConfigureWafLogBucket',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-eusouth1.s3.eu-south-1.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan419115-CustomResource-VPiXt5B9MPb3',
+ 'StackId': 'arn:aws:cloudformation:eu-south-1:XXXXXXXXXXXX:stack/wafmilan419115/0adf74c0-deef-11ed-9c16-0e4abbb1ce6a'
+ }
+ expected = {
+ 'bucket_name': 'wafmilan419115-waflogbucket-9qpon138lt2l',
+ 'lambda_function_arn': 'arn:aws:lambda:eu-south-1:XXXXXXXXXXXX:function:wafmilan419115-LogParser-zouewUuDjyQU',
+ 'lambda_log_partition_function_arn': None,
+ 'lambda_parser': False,
+ 'athena_parser': True,
+ 'bucket_prefix': 'AWSLogs/'
+ }
+ res = resource_manager.get_params_waf_event(event)
+ assert res == expected
+
+def test_get_params_app_access_update():
+ event = {
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ 'AppAccessLogBucketPrefix': 'prefix/'
+ }
+ }
+ expected = {
+ 'bucket_name': 'AppAccessLogBucket',
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': 'MoveS3LogsForPartition',
+ 'lambda_parser': False,
+ 'athena_parser': True,
+ 'bucket_prefix': 'prefix/'
+ }
+ res = resource_manager.get_params_app_access_update(event)
+ assert res == expected
+
+def test_get_params_app_access_update_prefix_match():
+ event = {
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ 'AppAccessLogBucketPrefix': 'AWSLogs/'
+ }
+ }
+ expected = {
+ 'bucket_name': 'AppAccessLogBucket',
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': 'MoveS3LogsForPartition',
+ 'lambda_parser': False,
+ 'athena_parser': True,
+ 'bucket_prefix': 'AWSLogs/'
+ }
+ res = resource_manager.get_params_app_access_update(event)
+ assert res == expected
+
+
+def test_get_params_app_access_create_event():
+ event = {
+ 'ResourceProperties': {
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'AppAccessLogBucketPrefix': 'prefix/'
+ }
+ }
+ expected = {
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': 'MoveS3LogsForPartition',
+ 'lambda_parser': False,
+ 'athena_parser': True,
+ 'bucket_name': 'AppAccessLogBucket',
+ 'bucket_prefix': 'prefix/'
+ }
+ res = resource_manager.get_params_app_access_create_event(event)
+ assert res == expected
+
+def test_get_params_app_access_create_event_prefix_match():
+ event = {
+ 'ResourceProperties': {
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'AppAccessLogBucketPrefix': 'AWSLogs/'
+ }
+ }
+ expected = {
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': 'MoveS3LogsForPartition',
+ 'bucket_name': 'AppAccessLogBucket',
+ 'lambda_parser': False,
+ 'athena_parser': True,
+ 'bucket_prefix': 'AWSLogs/'
+ }
+ res = resource_manager.get_params_app_access_create_event(event)
+ assert expected == res
+
+def test_contains_old_app_access_resources():
+ event = {
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ 'AppAccessLogBucketPrefix': 'prefix/'
+ },
+ 'OldResourceProperties': {
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition',
+ 'ScannersProbesLambdaLogParser': 'no',
+ 'ScannersProbesAthenaLogParser': 'yes',
+ }
+ }
+ expected = True
+ res = resource_manager.contains_old_app_access_resources(event)
+ assert res == expected
+
+
+def test_waf_has_old_resources():
+ event = {
+ 'ResourceProperties': {
+ 'LogParser': 'LogParser',
+ 'HttpFloodLambdaLogParser': 'no',
+ 'HttpFloodAthenaLogParser': 'yes',
+ 'WafLogBucket': 'WafLogBucket'
+ },
+ 'OldResourceProperties': {
+ 'LogParser': 'LogParser',
+ 'HttpFloodLambdaLogParser': 'no',
+ 'HttpFloodAthenaLogParser': 'yes',
+ 'WafLogBucket': 'WafLogBucket'
+ }
+ }
+ expected = False
+ res = resource_manager.waf_has_old_resources(event)
+ assert res == expected
+
+def test_get_params_bucket_lambda_delete_event():
+ event = {
+ 'ResourceProperties': {
+ 'WafLogBucket': 'WafLogBucket',
+ 'LogParser': 'LogParser',
+ }
+ }
+ expected = {
+ 'bucket_name': 'WafLogBucket',
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': None
+ }
+ res = resource_manager.get_params_bucket_lambda_delete_event(event)
+ assert res == expected
+
+def test_get_params_bucket_lambda_update_event():
+ event = {
+ 'OldResourceProperties': {
+ 'WafLogBucket': 'WafLogBucket',
+ 'LogParser': 'LogParser'
+ }
+ }
+ expected = {
+ 'bucket_name': 'WafLogBucket',
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': None
+ }
+ res = resource_manager.get_params_bucket_lambda_update_event(event)
+ assert res == expected
+
+def test_get_params_app_access_delete_event():
+ event = {
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition'
+ }
+ }
+ expected = {
+ 'bucket_name': 'AppAccessLogBucket',
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': 'MoveS3LogsForPartition'
+ }
+ res = resource_manager.get_params_app_access_delete_event(event)
+ assert res == expected
+
+def test_get_params_app_access_update_event():
+ event = {
+ 'OldResourceProperties': {
+ 'AppAccessLogBucket': 'AppAccessLogBucket',
+ 'LogParser': 'LogParser',
+ 'MoveS3LogsForPartition': 'MoveS3LogsForPartition'
+ }
+ }
+ expected = {
+ 'bucket_name': 'AppAccessLogBucket',
+ 'lambda_function_arn': 'LogParser',
+ 'lambda_log_partition_function_arn': 'MoveS3LogsForPartition'
+ }
+ res = resource_manager.get_params_app_access_update_event(event)
+ assert res == expected
+
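+# update_lambda_config should copy only the notification entries whose ARN
+# matches neither the log parser nor the partition function being removed.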
+def test_update_lambda_config():
+    to_modify = {'LambdaFunctionConfigurations': []}
+    resource_manager.update_lambda_config(
+        notification_conf={
+            'LambdaFunctionConfigurations': [
+                {'LambdaFunctionArn': 'LambdaFunctionArn'},
+                {'LambdaFunctionArn': 'NoMatch'}
+            ]
+        },
+        new_conf=to_modify,
+        lambda_function_arn='LambdaFunctionArn',
+        lambda_log_partition_function_arn=''
+    )
+    expected = {'LambdaFunctionConfigurations': [{'LambdaFunctionArn': 'NoMatch'}]}
+    assert to_modify == expected
\ No newline at end of file
diff --git a/source/helper/.coveragerc b/source/helper/.coveragerc
new file mode 100644
index 00000000..3aa79036
--- /dev/null
+++ b/source/helper/.coveragerc
@@ -0,0 +1,29 @@
+[run]
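+# Measure only solution code: skip tests and the third-party packages that
+# get vendored alongside the Lambda sources.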
+omit =
+ test/*
+ */__init__.py
+ **/__init__.py
+ backoff/*
+ bin/*
+ boto3/*
+ botocore/*
+ certifi/*
+ charset*/*
+ crhelper*
+ chardet*
+ dateutil/*
+ idna/*
+ jmespath/*
+ lib/*
+ package*
+ python_*
+ requests/*
+ s3transfer/*
+ six*
+ tenacity*
+ tests
+ urllib3/*
+ yaml
+ PyYAML-*
+source =
+ .
\ No newline at end of file
diff --git a/source/helper/__init__.py b/source/helper/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/helper/helper.py b/source/helper/helper.py
index 7234e3c8..59dd7e6b 100644
--- a/source/helper/helper.py
+++ b/source/helper/helper.py
@@ -1,5 +1,5 @@
######################################################################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
# with the License. A copy of the License is located at #
@@ -11,230 +11,29 @@
# and limitations under the License. #
######################################################################################################################
-import boto3
-import botocore
import json
-import logging
-import uuid
-import re
-import string
-import random
-import requests
-import os
-from os import environ
-from botocore.config import Config
-from lib.waflibv2 import WAFLIBv2
-from lib.boto3_util import create_client
-
-logging.getLogger().debug('Loading function')
-
-
-# ======================================================================================================================
-# Configure Access Log Bucket
-# ======================================================================================================================
-# ----------------------------------------------------------------------------------------------------------------------
-# Check S3 bucket requirements. This function raises exception if:
-#
-# 01. A empty bucket name is used
-# 02. The bucket already exists and was created in a account that you cant access
-# 03. The bucket already exists and was created in a different region.
-# You can't trigger log parser lambda function from another region.
-# ----------------------------------------------------------------------------------------------------------------------
-def check_app_log_bucket(log, region, bucket_name):
- log.info("[check_app_log_bucket] Start")
-
- if bucket_name.strip() == "":
- raise Exception('Failed to configure access log bucket. Name cannot be empty!')
-
- # ------------------------------------------------------------------------------------------------------------------
- # Check if bucket exists (and inside the specified region)
- # ------------------------------------------------------------------------------------------------------------------
- exists = True
- s3_client = create_client('s3')
- try:
- response = s3_client.head_bucket(Bucket=bucket_name)
- log.info("[check_app_log_bucket]response: \n%s" % response)
-
- except botocore.exceptions.ClientError as e:
- # If a client error is thrown, then check that it was a 404 error.
- # If it was a 404 error, then the bucket does not exist.
- error_code = int(e.response['Error']['Code'])
- if error_code == 404:
- exists = False
- log.info("[check_app_log_bucket]error_code: %s." % error_code)
- # ------------------------------------------------------------------------------------------------------------------
- # Check if the bucket was created in the specified Region or create one (if not exists)
- # ------------------------------------------------------------------------------------------------------------------
- if exists:
- response = None
- try:
- response = s3_client.get_bucket_location(Bucket=bucket_name)
- except Exception as e:
- raise Exception(
- 'Failed to access the existing bucket information. Check if you own this bucket and if it has proper access policy.')
-
- if response['LocationConstraint'] == None:
- response['LocationConstraint'] = 'us-east-1'
- elif response['LocationConstraint'] == 'EU':
- # Fix for github issue #72
- response['LocationConstraint'] = 'eu-west-1'
-
- if response['LocationConstraint'] != region:
- raise Exception(
- 'Bucket located in a different region. S3 bucket and Log Parser Lambda (and therefore, you CloudFormation Stack) must be created in the same Region.')
-
- log.info("[check_app_log_bucket] End")
-
-
-# ======================================================================================================================
-# Check AWS Service Dependencies
-# ======================================================================================================================
-def check_service_dependencies(log, resource_properties):
- log.debug("[check_service_dependencies] Start")
-
- unavailable_services = []
- SCOPE = os.getenv('SCOPE')
- waflib = WAFLIBv2()
- # ------------------------------------------------------------------------------------------------------------------
- # AWS WAF Resource TEST
- # ------------------------------------------------------------------------------------------------------------------
- try:
- waflib.list_web_acls(log, SCOPE)
- except botocore.exceptions.EndpointConnectionError:
- unavailable_services.append('AWS WAF')
- except Exception:
- log.debug("[check_service_dependencies] AWS WAF tested")
-
- # ------------------------------------------------------------------------------------------------------------------
- # Amazon Athena
- # ------------------------------------------------------------------------------------------------------------------
- if resource_properties['AthenaLogParser'] == "yes":
- try:
- athena_client = create_client('athena')
- athena_client.list_named_queries()
- except botocore.exceptions.EndpointConnectionError:
- unavailable_services.append('Amazon Athena')
- except Exception:
- log.debug("[check_service_dependencies] Amazon Athena tested")
-
- # ------------------------------------------------------------------------------------------------------------------
- # AWS Glue
- # ------------------------------------------------------------------------------------------------------------------
- if resource_properties['AthenaLogParser'] == "yes":
- try:
- glue_client = create_client('glue')
- glue_client.get_databases()
- except botocore.exceptions.EndpointConnectionError:
- unavailable_services.append('AWS Glue')
- except Exception:
- log.debug("[check_service_dependencies] AWS Glue")
-
- # ------------------------------------------------------------------------------------------------------------------
- # Amazon Kinesis Data Firehose
- # ------------------------------------------------------------------------------------------------------------------
- if resource_properties['HttpFloodProtectionLogParserActivated'] == "yes":
- try:
- firehose_client = create_client('firehose')
- firehose_client.list_delivery_streams()
- except botocore.exceptions.EndpointConnectionError:
- unavailable_services.append('Amazon Kinesis Data Firehose')
- except Exception:
- log.debug("[check_service_dependencies] Amazon Kinesis Data Firehose tested")
-
- if unavailable_services:
- raise Exception(
- "Failed to access the following service(s): %s. Please check if this region supports all required services: https://amzn.to/2SzWJXj" % '; '.join(
- unavailable_services))
-
- log.debug("[check_service_dependencies] End")
-
-
-def check_requirements(log, resource_properties):
- log.debug("[check_requirements] Start")
-
- # ------------------------------------------------------------------------------------------------------------------
- # Logging Web ACL Traffic for CloudFront distribution
- # ------------------------------------------------------------------------------------------------------------------
- if (resource_properties['HttpFloodProtectionLogParserActivated'] == "yes" and
- resource_properties['EndpointType'].lower() == 'cloudfront' and
- resource_properties['Region'] != 'us-east-1'):
- raise Exception(
- "If you are capturing AWS WAF logs for a Amazon CloudFront distribution, create the stack in US East (N. Virginia). More info: https://amzn.to/2F5L1Ae")
-
- # ------------------------------------------------------------------------------------------------------------------
- # Logging Web ACL Traffic for CloudFront distribution
- # ------------------------------------------------------------------------------------------------------------------
- if (resource_properties['HttpFloodProtectionRateBasedRuleActivated'] == "yes" and
- int(resource_properties['RequestThreshold']) < 100):
- raise Exception(
- "The minimum rate-based rule rate limit per 5 minute period is 100. If need to use values below that, please select AWS Lambda or Amazon Athena log parser.")
-
- log.debug("[check_requirements] End")
-
-
-def send_response(log, event, context, responseStatus, responseData, resourceId, reason=None):
- log.debug("[send_response] Start")
-
- responseUrl = event['ResponseURL']
- cw_logs_url = "https://console.aws.amazon.com/cloudwatch/home?region=%s#logEventViewer:group=%s;stream=%s" % (
- context.invoked_function_arn.split(':')[3], context.log_group_name, context.log_stream_name)
-
- log.info(responseUrl)
- responseBody = {}
- responseBody['Status'] = responseStatus
- responseBody['Reason'] = reason or ('See the details in CloudWatch Logs: ' + cw_logs_url)
- responseBody['PhysicalResourceId'] = resourceId
- responseBody['StackId'] = event['StackId']
- responseBody['RequestId'] = event['RequestId']
- responseBody['LogicalResourceId'] = event['LogicalResourceId']
- responseBody['NoEcho'] = False
- responseBody['Data'] = responseData
-
- json_responseBody = json.dumps(responseBody)
- log.debug("Response body:\n" + json_responseBody)
-
- headers = {
- 'content-type': '',
- 'content-length': str(len(json_responseBody))
- }
-
- try:
- response = requests.put(responseUrl,
- data=json_responseBody,
- headers=headers,
- timeout=600)
- log.debug("Status code: " + response.reason)
-
- except Exception as error:
- log.error("[send_response] Failed executing requests.put(..)")
- log.error(str(error))
-
- log.debug("[send_response] End")
-
+from stack_requirements import StackRequirements
+from lib.cfn_response import send_response
+from lib.logging_util import set_log_level
# ======================================================================================================================
# Lambda Entry Point
# ======================================================================================================================
def lambda_handler(event, context):
- log = logging.getLogger()
+ log = set_log_level()
- responseStatus = 'SUCCESS'
+ response_status = 'SUCCESS'
reason = None
- responseData = {}
- resourceId = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else event['LogicalResourceId']
+ response_data = {}
+ resource_id = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else event['LogicalResourceId']
result = {
'StatusCode': '200',
'Body': {'message': 'success'}
}
- # ------------------------------------------------------------------
- # Set Log Level
- # ------------------------------------------------------------------
- log_level = str(os.getenv('LOG_LEVEL').upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
+ stack_requirements = StackRequirements(log)
+ log.info(f'context: {context}')
try:
# ----------------------------------------------------------
# Read inputs parameters
@@ -246,99 +45,21 @@ def lambda_handler(event, context):
# ----------------------------------------------------------
# Process event
# ----------------------------------------------------------
- if event['ResourceType'] == "Custom::CheckRequirements":
- if 'CREATE' in request_type or 'UPDATE' in request_type:
- check_service_dependencies(log, event['ResourceProperties'])
-
- if event['ResourceProperties']['ProtectionActivatedScannersProbes'] == 'yes':
- check_app_log_bucket(log, event['ResourceProperties']['Region'],
- event['ResourceProperties']['AppAccessLogBucket'])
-
- check_requirements(log, event['ResourceProperties'])
-
- # DELETE: do nothing
-
- elif event['ResourceType'] == "Custom::CreateUUID":
- if 'CREATE' in request_type:
- responseData['UUID'] = str(uuid.uuid4())
- log.debug("UUID: %s" % responseData['UUID'])
-
- # UPDATE: do nothing
- # DELETE: do nothing
-
- elif event['ResourceType'] == "Custom::CreateDeliveryStreamName":
- # --------------------------------------------------------------------------
- # Delivery stream names acceptable characters are:
- # - Uppercase and lowercase letters
- # - Numbers
- # - Underscores
- # - Hyphens
- # - Periods
- # Also:
- # - It must be between 1 and 64 characters long
- # - AWS WAF requires a name starting with the prefix "aws-waf-logs-"
- # --------------------------------------------------------------------------
- if 'CREATE' in request_type:
- prefix = "aws-waf-logs-"
- suffix = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(6)])
- stack_name = event['ResourceProperties']['StackName']
-
- # remove spaces
- stack_name = stack_name.replace(" ", "_")
-
- # remove everything that is not [a-zA-Z0-9] or '_' and strip '_'
- # note: remove hypens and periods for convenience
- stack_name = re.sub(r'\W', '', stack_name).strip('_')
-
- delivery_stream_name = prefix + "_" + suffix
- if len(stack_name) > 0:
- max_len = 64 - len(prefix) - 1 - len(suffix)
- delivery_stream_name = prefix + stack_name[:max_len] + "_" + suffix
-
- responseData['DeliveryStreamName'] = delivery_stream_name
- log.debug("DeliveryStreamName: %s" % responseData['DeliveryStreamName'])
-
- # UPDATE: do nothing
- # DELETE: do nothing
-
- elif event['ResourceType'] == "Custom::CreateGlueDatabaseName":
- # --------------------------------------------------------------------------
- # Delivery stream names acceptable characters are:
- # - Lowercase letters
- # - Numbers
- # - Underscores
- # Also:
- # - It must be between 1 and 32 characters long. Names longer than that
- # break AWS::Athena::NamedQuery database parameter
- # --------------------------------------------------------------------------
- if 'CREATE' in request_type:
- suffix = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(6)]).lower()
- stack_name = event['ResourceProperties']['StackName']
-
- # remove spaces
- stack_name = stack_name.replace(" ", "_")
-
- # remove everything that is not [a-z0-9] or '_' and strip '_'
- stack_name = re.sub(r'\W', '', stack_name).strip('_').lower()
-
- # reduce to max_len (considering random sufix + '_')
- max_len = 32 - 1 - len(suffix)
- stack_name = stack_name[:max_len].strip('_')
+ if event['ResourceType'] == "Custom::CheckRequirements" and request_type in {'CREATE', 'UPDATE'}:
+ stack_requirements.verify_requirements_and_dependencies(event)
- # define database name
- database_name = suffix
- if len(stack_name) > 0:
- database_name = stack_name + '_' + suffix
+ elif event['ResourceType'] == "Custom::CreateUUID" and request_type == 'CREATE':
+ stack_requirements.create_uuid(response_data)
- responseData['DatabaseName'] = database_name
- log.debug("DatabaseName: %s" % responseData['DatabaseName'])
+ elif event['ResourceType'] == "Custom::CreateDeliveryStreamName" and request_type == 'CREATE':
+ stack_requirements.create_delivery_stream_name(event, response_data)
- # UPDATE: do nothing
- # DELETE: do nothing
+ elif event['ResourceType'] == "Custom::CreateGlueDatabaseName" and request_type == 'CREATE':
+ stack_requirements.create_db_name(event, response_data)
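+
+        # Other request types intentionally fall through with no action:
+        # UPDATE and DELETE for the Create* resources, DELETE for CheckRequirements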
except Exception as error:
log.error(error)
- responseStatus = 'FAILED'
+ response_status = 'FAILED'
reason = str(error)
result = {
'statusCode': '400',
@@ -350,6 +71,6 @@ def lambda_handler(event, context):
# Send Result
# ------------------------------------------------------------------
if 'ResponseURL' in event:
- send_response(log, event, context, responseStatus, responseData, resourceId, reason)
+ send_response(log, event, context, response_status, response_data, resource_id, reason)
return json.dumps(result)
diff --git a/source/helper/requirements.txt b/source/helper/requirements.txt
index 4680cb40..81046f69 100644
--- a/source/helper/requirements.txt
+++ b/source/helper/requirements.txt
@@ -1,2 +1,2 @@
-backoff>=2.2.1
-requests>=2.28.2
+backoff~=2.2.1
+requests~=2.28.2
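+# "~=" is the PEP 440 compatible-release operator: e.g. ~=2.2.1 allows 2.2.x (>= 2.2.1) only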
diff --git a/source/helper/requirements_dev.txt b/source/helper/requirements_dev.txt
new file mode 100644
index 00000000..ab317bdd
--- /dev/null
+++ b/source/helper/requirements_dev.txt
@@ -0,0 +1,11 @@
+botocore~=1.29.85
+boto3~=1.26.85
+mock~=5.0.1
+moto~=4.1.4
+pytest~=7.2.2
+pytest-mock~=3.10.0
+pytest-runner~=6.0.0
+freezegun~=1.2.2
+pytest-cov~=4.0.0
+pytest-env~=0.8.1
+pyparsing~=3.0.9
\ No newline at end of file
diff --git a/source/helper/stack_requirements.py b/source/helper/stack_requirements.py
new file mode 100644
index 00000000..8ce8f194
--- /dev/null
+++ b/source/helper/stack_requirements.py
@@ -0,0 +1,213 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import botocore
+import string
+import random
+import re
+import uuid
+from lib.s3_util import S3
+
+WAF_FOR_CLOUDFRONT_EXCEPTION_MESSAGE = '''
+    If you are capturing AWS WAF logs for an Amazon CloudFront
+ distribution, create the stack in US East (N. Virginia).'''
+INVALID_FLOOD_THRESHOLD_MESSAGE = '''
+    The minimum rate-based rule rate limit per 5-minute period is 100.
+    If you need to use values below that,
+    please select the AWS Lambda or Amazon Athena log parser.'''
+EMPTY_S3_BUCKET_NAME_EXCEPTION_MESSAGE = '''
+ Failed to configure access log bucket. Name cannot be empty!'''
+ACCESS_ISSUE_S3_BUCKET_EXCEPTION_MESSAGE = '''
+ Failed to access the existing bucket information.
+    Check if you own this bucket and if it has a proper access policy.'''
+INCORRECT_REGION_S3_LAMBDA_EXCEPTION_MESSAGE = '''
+    The bucket is located in a different region. The S3 bucket and the Log Parser Lambda
+    (and therefore, your CloudFormation stack) must be created in the same Region.'''
+
+EMPTY_S3_BUCKET_NAME_EXCEPTION = Exception(EMPTY_S3_BUCKET_NAME_EXCEPTION_MESSAGE)
+ACCESS_ISSUE_S3_BUCKET_EXCEPTION = Exception(ACCESS_ISSUE_S3_BUCKET_EXCEPTION_MESSAGE)
+INCORRECT_REGION_S3_LAMBDA_EXCEPTION = Exception(INCORRECT_REGION_S3_LAMBDA_EXCEPTION_MESSAGE)
+WAF_FOR_CLOUDFRONT_EXCEPTION = Exception(WAF_FOR_CLOUDFRONT_EXCEPTION_MESSAGE)
+INVALID_FLOOD_THRESHOLD_EXCEPTION = Exception(INVALID_FLOOD_THRESHOLD_MESSAGE)
+
+class StackRequirements:
+
+ def __init__(self, log):
+ self.log = log
+ self.s3 = S3(log)
+
+
+ # --------------------------------------------------------------------------
+    # Glue database names acceptable characters are:
+ # - Lowercase letters
+ # - Numbers
+ # - Underscores
+ # Also:
+ # - It must be between 1 and 32 characters long. Names longer than that
+ # break AWS::Athena::NamedQuery database parameter
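+    #  - Example: stack "My WAF Stack" with suffix "a1b2c3" yields "my_waf_stack_a1b2c3"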
+ # --------------------------------------------------------------------------
+ def create_db_name(self, event: dict, response_data: dict) -> None:
+ suffix = self.generate_suffix().lower()
+ stack_name = self.normalize_stack_name(event['ResourceProperties']['StackName'], suffix)
+
+ # define database name
+ database_name = suffix
+ if len(stack_name) > 0:
+ database_name = stack_name + '_' + suffix
+
+ response_data['DatabaseName'] = database_name
+ self.log.debug(f"DatabaseName: {response_data['DatabaseName']}")
+
+
+ def create_uuid(self, response_data: dict) -> None:
+ response_data['UUID'] = str(uuid.uuid4())
+ self.log.debug(f"UUID: {response_data['UUID']}")
+
+
+ # --------------------------------------------------------------------------
+ # Delivery stream names acceptable characters are:
+ # - Uppercase and lowercase letters
+ # - Numbers
+ # - Underscores
+ # - Hyphens
+ # - Periods
+ # Also:
+ # - It must be between 1 and 64 characters long
+ # - AWS WAF requires a name starting with the prefix "aws-waf-logs-"
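+    #  - Example: stack "My WAF Stack" with suffix "Ab12Cd" yields "aws-waf-logs-My_WAF_Stack_Ab12Cd"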
+ # --------------------------------------------------------------------------
+ def create_delivery_stream_name(self, event: dict, response_data: dict) -> None:
+ prefix = "aws-waf-logs-"
+ suffix = self.generate_suffix()
+ stack_name = event['ResourceProperties']['StackName']
+
+ stack_name = stack_name.replace(" ", "_")
+
+ # remove everything that is not [a-zA-Z0-9] or '_' and strip '_'
+        # note: remove hyphens and periods for convenience
+ stack_name = re.sub(r'\W', '', stack_name).strip('_')
+
+ delivery_stream_name = prefix + "_" + suffix
+ if len(stack_name) > 0:
+ max_len = 64 - len(prefix) - 1 - len(suffix)
+ delivery_stream_name = prefix + stack_name[:max_len] + "_" + suffix
+
+ response_data['DeliveryStreamName'] = delivery_stream_name
+ self.log.debug(f"DeliveryStreamName: {response_data['DeliveryStreamName']}")
+
+
+ def verify_requirements_and_dependencies(self, event: dict):
+ if self.is_active_scanner_probes_protection(event):
+ self.check_app_log_bucket(
+ region=event['ResourceProperties']['Region'],
+ bucket_name=event['ResourceProperties']['AppAccessLogBucket']
+ )
+
+ self.check_requirements(event['ResourceProperties'])
+
+
+ def is_active_scanner_probes_protection(self, event: dict) -> bool:
+ return event['ResourceProperties']['ProtectionActivatedScannersProbes'] == 'yes'
+
+
+ # ======================================================================================================================
+ # Configure Access Log Bucket
+ # ======================================================================================================================
+ # ----------------------------------------------------------------------------------------------------------------------
+    # Check S3 bucket requirements. This function raises an exception if:
+    #
+    # 01. An empty bucket name is used
+    # 02. The bucket already exists and was created in an account that you can't access
+    # 03. The bucket already exists and was created in a different region.
+    #     You can't trigger the log parser Lambda function from another region.
+ # ----------------------------------------------------------------------------------------------------------------------
+ def check_app_log_bucket(self, region: str, bucket_name: str) -> None:
+ self.log.info("[check_app_log_bucket] Start")
+
+ if bucket_name.strip() == "":
+ raise EMPTY_S3_BUCKET_NAME_EXCEPTION
+
+ exists = self.verify_bucket_existence(bucket_name)
+
+ if not exists:
+ return
+
+ self.verify_bucket_region(bucket_name, region)
+
+
+ def verify_bucket_region(self, bucket_name: str, region: str) -> None:
+ response = None
+ try:
+ response = self.s3.get_bucket_location(bucket_name)
+ except Exception:
+ raise ACCESS_ISSUE_S3_BUCKET_EXCEPTION
+
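+        # GetBucketLocation returns a null LocationConstraint for buckets in
+        # us-east-1 and the legacy value 'EU' for older eu-west-1 buckets, so
+        # normalize both before comparing with the stack region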
+        if response['LocationConstraint'] is None:
+ response['LocationConstraint'] = 'us-east-1'
+ elif response['LocationConstraint'] == 'EU':
+ response['LocationConstraint'] = 'eu-west-1'
+
+ if response['LocationConstraint'] != region:
+ raise INCORRECT_REGION_S3_LAMBDA_EXCEPTION
+
+
+    def verify_bucket_existence(self, bucket_name: str) -> bool:
+        try:
+            self.s3.head_bucket(bucket_name)
+            return True
+
+        except botocore.exceptions.ClientError as e:
+            # If a client error is thrown, check whether it was a 404 error.
+            # If it was a 404 error, the bucket does not exist.
+            error_code = int(e.response['Error']['Code'])
+            if error_code == 404:
+                self.log.info(f"[check_app_log_bucket] error_code: {error_code}. Bucket {bucket_name} doesn't exist")
+                return False
+            # Any other client error (e.g. 403) means the bucket exists but is
+            # not directly accessible; let verify_bucket_region surface the issue
+            return True
+
+
+ def check_requirements(self, resource_properties: dict) -> None:
+ self.log.debug("[check_requirements] Start")
+
+ if self.is_waf_for_cloudfront(resource_properties):
+ raise WAF_FOR_CLOUDFRONT_EXCEPTION
+
+ if self.is_invalid_flood_threshold(resource_properties):
+ raise INVALID_FLOOD_THRESHOLD_EXCEPTION
+
+ self.log.debug("[check_requirements] End")
+
+
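+    # WAFv2 web ACLs for CloudFront (CLOUDFRONT scope) can only be created in
+    # us-east-1, so a CloudFront endpoint in any other region fails the check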
+ def is_waf_for_cloudfront(self, resource_properties: dict) -> bool:
+ return resource_properties['HttpFloodProtectionLogParserActivated'] == "yes" and \
+ resource_properties['EndpointType'].lower() == 'cloudfront' and \
+ resource_properties['Region'] != 'us-east-1'
+
+
+ def is_invalid_flood_threshold(self, resource_properties: dict) -> bool:
+ return resource_properties['HttpFloodProtectionRateBasedRuleActivated'] == "yes" and \
+ int(resource_properties['RequestThreshold']) < 100
+
+
+ def generate_suffix(self) -> str:
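+        # 6 random alphanumeric characters; callers lowercase the result where required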
+        return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(6))
+
+
+ def normalize_stack_name(self, stack_name, suffix) -> str:
+ # remove spaces
+ stack_name = stack_name.replace(" ", "_")
+
+ # remove everything that is not [a-z0-9] or '_' and strip '_'
+ stack_name = re.sub(r'\W', '', stack_name).strip('_').lower()
+
+        # reduce to max_len (considering random suffix + '_')
+ max_len = 32 - 1 - len(suffix)
+ stack_name = stack_name[:max_len].strip('_')
+ return stack_name
\ No newline at end of file
diff --git a/source/helper/test/__init__.py b/source/helper/test/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/helper/test/conftest.py b/source/helper/test/conftest.py
new file mode 100644
index 00000000..3866670e
--- /dev/null
+++ b/source/helper/test/conftest.py
@@ -0,0 +1,136 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import pytest
+import boto3
+from moto import (
+ mock_s3
+)
+class Context:
+ def __init__(self, invoked_function_arn, log_group_name, log_stream_name):
+ self.invoked_function_arn = invoked_function_arn
+ self.log_group_name = log_group_name
+ self.log_stream_name = log_stream_name
+
+@pytest.fixture(scope="session")
+def example_context():
+ return Context(':::invoked_function_arn', 'log_group_name', 'log_stream_name')
+
+@pytest.fixture(scope="session")
+def successful_response():
+ return '{"StatusCode": "200", "Body": {"message": "success"}}'
+
+@pytest.fixture(scope="session")
+def error_response():
+ return '{"statusCode": "400", "body": {"message": "\'Region\'"}}'
+
+@pytest.fixture(scope="session")
+def s3_client():
+ with mock_s3():
+ s3 = boto3.client('s3')
+ yield s3
+
+@pytest.fixture(scope="session")
+def s3_bucket(s3_client):
+ my_bucket = 'bucket_name'
+ s3_client.create_bucket(Bucket=my_bucket)
+ return my_bucket
+
+@pytest.fixture(scope="session")
+def check_requirements_event():
+ return {
+ 'LogicalResourceId': 'CheckRequirements',
+ 'RequestId': 'cf0d8086-5b6f-4758-a323-e723925fcb30',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'AppAccessLogBucket': 'wiq-wafohio424243-wafohio424243',
+ 'AthenaLogParser': 'yes',
+ 'EndpointType': 'ALB',
+ 'HttpFloodProtectionLogParserActivated': 'yes',
+ 'HttpFloodProtectionRateBasedRuleActivated': 'no',
+ 'ProtectionActivatedScannersProbes': 'yes',
+ 'Region': 'us-east-2',
+ 'RequestThreshold': '100',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc'},
+ 'ResourceType': 'Custom::CheckRequirements',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio424243/276aee50-e2e9-11ed-89eb-067ac5804c7f'
+ }
+
+@pytest.fixture(scope="session")
+def create_uuid_event():
+ return {
+ 'LogicalResourceId': 'CreateUniqueID',
+ 'RequestId': 'f84694a1-87c0-4ad8-b483-f7b87147514f',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc'},
+ 'ResourceType': 'Custom::CreateUUID',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio424243/276aee50-e2e9-11ed-89eb-067ac5804c7f'
+ }
+
+@pytest.fixture(scope="session")
+def create_delivery_stream_name_event():
+ return {
+ 'LogicalResourceId': 'CreateDeliveryStreamName',
+ 'RequestId': '323e36d8-d20b-446f-9b89-7a7895a30fab',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc',
+ 'StackName': 'wafohio424243'
+ },
+ 'ResourceType': 'Custom::CreateDeliveryStreamName',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio424243/276aee50-e2e9-11ed-89eb-067ac5804c7f'
+ }
+
+
+@pytest.fixture(scope="session")
+def create_db_name_event():
+ return {
+ 'LogicalResourceId': 'CreateGlueDatabaseName',
+ 'RequestId': 'e5a8e6c9-3f75-4da9-bcce-c0ac3d2ba823',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc',
+ 'StackName': 'wafohio424243'
+ },
+ 'ResourceType': 'Custom::CreateGlueDatabaseName',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio424243/276aee50-e2e9-11ed-89eb-067ac5804c7f'
+ }
+
+@pytest.fixture(scope="session")
+def erroneous_check_requirements_event():
+ return {
+ 'LogicalResourceId': 'CheckRequirements',
+ 'RequestId': 'cf0d8086-5b6f-4758-a323-e723925fcb30',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'AthenaLogParser': 'yes',
+ 'EndpointType': 'ALB',
+ 'HttpFloodProtectionLogParserActivated': 'yes',
+ 'HttpFloodProtectionRateBasedRuleActivated': 'no',
+ 'ProtectionActivatedScannersProbes': 'yes',
+ 'RequestThreshold': '100',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc'},
+ 'ResourceType': 'Custom::CheckRequirements',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio424243-Helper-xse5nh2WeWlc',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio424243/276aee50-e2e9-11ed-89eb-067ac5804c7f'
+ }
diff --git a/source/helper/test/test_helper.py b/source/helper/test/test_helper.py
new file mode 100644
index 00000000..4e4fb144
--- /dev/null
+++ b/source/helper/test/test_helper.py
@@ -0,0 +1,40 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+from helper.helper import lambda_handler
+
+def test_check_requirements(check_requirements_event, example_context, successful_response):
+ result = lambda_handler(check_requirements_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_create_uuid(create_uuid_event, example_context, successful_response):
+ result = lambda_handler(create_uuid_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_create_delivery_stream_name_event(create_delivery_stream_name_event, example_context, successful_response):
+ result = lambda_handler(create_delivery_stream_name_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_create_db_name(create_db_name_event, example_context, successful_response):
+ result = lambda_handler(create_db_name_event, example_context)
+ expected = successful_response
+ assert result == expected
+
+def test_error(erroneous_check_requirements_event, example_context, error_response):
+ result = lambda_handler(erroneous_check_requirements_event, example_context)
+ expected = error_response
+ assert result == expected
+
\ No newline at end of file
diff --git a/source/helper/test/test_stack_requirements.py b/source/helper/test/test_stack_requirements.py
new file mode 100644
index 00000000..978ba91e
--- /dev/null
+++ b/source/helper/test/test_stack_requirements.py
@@ -0,0 +1,155 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+from helper.stack_requirements import (
+ StackRequirements,
+ WAF_FOR_CLOUDFRONT_EXCEPTION_MESSAGE,
+ INVALID_FLOOD_THRESHOLD_MESSAGE,
+ EMPTY_S3_BUCKET_NAME_EXCEPTION_MESSAGE,
+ INCORRECT_REGION_S3_LAMBDA_EXCEPTION_MESSAGE,
+ ACCESS_ISSUE_S3_BUCKET_EXCEPTION_MESSAGE
+)
+from moto import (
+ mock_s3
+)
+from uuid import UUID
+import logging
+import boto3
+
+
+log_level = 'DEBUG'
+logging.getLogger().setLevel(log_level)
+log = logging.getLogger('test_help')
+
+stack_requirements = StackRequirements(log=log)
+
+
+def test_create_delivery_stream_name():
+ event = {
+ 'ResourceProperties': {
+ 'StackName': 'stack-name'
+ }
+ }
+ response_data = {}
+ stack_requirements.create_delivery_stream_name(event, response_data)
+
+ expected = 'aws-waf-logs-stackname'
+    # ignore the trailing '_' plus randomly generated 6-char suffix (7 chars total)
+ assert response_data['DeliveryStreamName'][:-7] == expected
+
+
+def test_normalize_stack_name():
+ stack_name = 'test stack name_)(just over thirty two characters'
+ suffix = 'adsf13'
+ expected = 'test_stack_name_just_over'
+
+ res = stack_requirements.normalize_stack_name(stack_name, suffix)
+
+ assert res == expected
+
+def test_create_db_name():
+ event = {
+ 'ResourceProperties': {
+ 'StackName': 'stack_name'
+ }
+ }
+ response_data = {}
+ expected = 'stack_name'
+ stack_requirements.create_db_name(event, response_data)
+
+    # ignore the trailing '_' plus randomly generated 6-char suffix (7 chars total)
+ assert response_data['DatabaseName'][:-7] == expected
+
+
+def test_create_uuid():
+ response_data = {}
+ stack_requirements.create_uuid(response_data)
+ try:
+ UUID(response_data['UUID'], version=4)
+ assert True
+ except ValueError:
+ assert False
+
+
+def test_check_app_log_bucket_empty_bucket_name_exception():
+ expected = EMPTY_S3_BUCKET_NAME_EXCEPTION_MESSAGE
+ try:
+ stack_requirements.check_app_log_bucket(region='us-east-1', bucket_name="")
+ except Exception as e:
+ assert str(e) == expected
+
+
+@mock_s3
+def test_check_app_log_bucket():
+ conn = boto3.resource("s3", region_name="us-east-1")
+ conn.create_bucket(Bucket="mybucket")
+
+ expected = INCORRECT_REGION_S3_LAMBDA_EXCEPTION_MESSAGE
+ try:
+ stack_requirements.check_app_log_bucket(region='us-east-2', bucket_name="mybucket")
+ except Exception as e:
+ assert str(e) == expected
+
+
+@mock_s3
+def test_verify_bucket_region_access_issue():
+ region = 'us-east-1'
+ conn = boto3.resource("s3", region_name=region)
+ conn.create_bucket(Bucket="mybucket1")
+
+ expected = ACCESS_ISSUE_S3_BUCKET_EXCEPTION_MESSAGE
+ try:
+ stack_requirements.verify_bucket_region(
+ bucket_name='nonexistent',
+ region=region)
+ except Exception as e:
+ assert str(e) == expected
+
+
+def test_check_requirements_invalid_flood_threshold():
+ resource_properties = {
+ 'HttpFloodProtectionLogParserActivated': "yes",
+ 'HttpFloodProtectionRateBasedRuleActivated': "yes",
+ 'EndpointType': 'cloudfront',
+ 'Region': 'us-east-1',
+ 'RequestThreshold': '10'
+ }
+ expected = INVALID_FLOOD_THRESHOLD_MESSAGE
+
+ try:
+ stack_requirements.check_requirements(resource_properties)
+ except Exception as e:
+ assert str(e) == expected
+
+
+def test_is_waf_for_cloudfront():
+ resource_properties = {
+ 'HttpFloodProtectionLogParserActivated': "yes",
+ 'EndpointType': 'cloudfront',
+ 'Region': 'us-east-2'
+ }
+ expected = True
+ res = stack_requirements.is_waf_for_cloudfront(resource_properties)
+ assert res == expected
+
+
+
+def test_is_invalid_flood_threshold():
+ resource_properties = {
+ 'HttpFloodProtectionRateBasedRuleActivated': "yes",
+ 'RequestThreshold': '10'
+ }
+ expected = True
+ res = stack_requirements.is_invalid_flood_threshold(resource_properties)
+ assert res == expected
diff --git a/source/ip_retention_handler/.coveragerc b/source/ip_retention_handler/.coveragerc
new file mode 100644
index 00000000..3aa79036
--- /dev/null
+++ b/source/ip_retention_handler/.coveragerc
@@ -0,0 +1,29 @@
+[run]
+omit =
+ test/*
+ */__init__.py
+ **/__init__.py
+ backoff/*
+ bin/*
+ boto3/*
+ botocore/*
+ certifi/*
+ charset*/*
+ crhelper*
+ chardet*
+ dateutil/*
+ idna/*
+ jmespath/*
+ lib/*
+ package*
+ python_*
+ requests/*
+ s3transfer/*
+ six*
+ tenacity*
+ tests
+ urllib3/*
+ yaml
+ PyYAML-*
+source =
+ .
\ No newline at end of file
diff --git a/source/ip_retention_handler/remove_expired_ip.py b/source/ip_retention_handler/remove_expired_ip.py
index 2b78421c..0e20dde7 100644
--- a/source/ip_retention_handler/remove_expired_ip.py
+++ b/source/ip_retention_handler/remove_expired_ip.py
@@ -11,8 +11,6 @@
# and limitations under the License. #
######################################################################################################################
-import json
-import logging
from time import sleep
from os import environ
from datetime import datetime
@@ -20,6 +18,7 @@
from lib.waflibv2 import WAFLIBv2
from lib.sns_util import SNS
from lib.solution_metrics import send_metrics
+from lib.logging_util import set_log_level
waflib = WAFLIBv2()
@@ -144,8 +143,8 @@ def send_notification(self, log, topic_arn, ip_set_name, ip_set_id, ip_retention
notify = SNS(log)
- subject = "AWS WAF Security Automations - IP Expiration Notification"
- message = "You are receiving this email because you have configured IP retention in AWS WAF Security Automations. " \
+ subject = "Security Automations for AWS WAF - IP Expiration Notification"
+ message = "You are receiving this email because you have configured IP retention in Security Automations for AWS WAF. " \
"Expired IPs have been removed from the following IP set. For details, locate and view {} lambda logs using the " \
"timestamp below. \n\n" \
"IP set name: {}\n IP set id: {}\n IP retention period (minute): {}\n Region: {}\n UTC Time: {}" \
@@ -183,6 +182,7 @@ def send_anonymous_usage_data(self, log, remove_ip_list, name):
"ip_set": ip_set,
"lambda_invocation_count": 1,
"sns_email_notification": environ.get('SNS_EMAIL'),
+ "provisioner": environ.get('provisioner') if "provisioner" in environ else "cfn"
}
log.info("[remove_expired_ip: send_anonymous_usage_data] Send Data")
@@ -198,15 +198,9 @@ def lambda_handler(event, context):
It is triggered by TTL DynamoDB Stream.
"""
- log = logging.getLogger()
+ log = set_log_level()
try:
- # Set Log Level
- log_level = str(environ['LOG_LEVEL'].upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
-
log.info('[remove_expired_id: lambda_handler] Start')
log.info("Lambda Handler Event: \n{}".format(event))
diff --git a/source/ip_retention_handler/requirements.txt b/source/ip_retention_handler/requirements.txt
index 511213cc..635b9d03 100644
--- a/source/ip_retention_handler/requirements.txt
+++ b/source/ip_retention_handler/requirements.txt
@@ -1,2 +1,2 @@
-requests>=2.28.2
-backoff>=2.2.1
\ No newline at end of file
+requests~=2.28.2
+backoff~=2.2.1
\ No newline at end of file
diff --git a/source/ip_retention_handler/requirements_dev.txt b/source/ip_retention_handler/requirements_dev.txt
new file mode 100644
index 00000000..1f9e6301
--- /dev/null
+++ b/source/ip_retention_handler/requirements_dev.txt
@@ -0,0 +1,10 @@
+botocore~=1.29.85
+boto3~=1.26.85
+mock~=5.0.1
+moto~=4.1.4
+pytest~=7.2.2
+pytest-mock~=3.10.0
+pytest-runner~=6.0.0
+freezegun~=1.2.2
+pytest-cov~=4.0.0
+pytest-env~=0.8.1
\ No newline at end of file
diff --git a/source/ip_retention_handler/set_ip_retention.py b/source/ip_retention_handler/set_ip_retention.py
index 18ad4444..4d5d462d 100644
--- a/source/ip_retention_handler/set_ip_retention.py
+++ b/source/ip_retention_handler/set_ip_retention.py
@@ -11,11 +11,11 @@
# and limitations under the License. #
######################################################################################################################
-import logging
from os import environ
from calendar import timegm
from datetime import datetime, timedelta
from lib.dynamodb_util import DDB
+from lib.logging_util import set_log_level
class SetIPRetention(object):
"""
@@ -59,9 +59,9 @@ def make_item(self, event):
item = {}
request_parameters = self.is_none(event.get('requestParameters', {}))
- ip_retention_period = int(environ.get('IP_RETENTION_PEROID_ALLOWED_MINUTE')) \
+ ip_retention_period = int(environ.get('IP_RETENTION_PERIOD_ALLOWED_MINUTE')) \
if self.is_none(str(request_parameters.get('name')).find('Whitelist')) != -1 \
- else int(environ.get('IP_RETENTION_PEROID_DENIED_MINUTE'))
+ else int(environ.get('IP_RETENTION_PERIOD_DENIED_MINUTE'))
# If retention period is not set, stop and return
if ip_retention_period == -1:
@@ -114,21 +114,15 @@ def put_item(self, table_name):
return response
-def lambda_handler(event, context):
+def lambda_handler(event, _):
"""
- Invoke functions to put ip retentation info into ddb table.
+ Invoke functions to put ip retention info into ddb table.
It is triggered by a CloudWatch events rule.
"""
- log = logging.getLogger()
+ log = set_log_level()
try:
- # Set Log Level
- log_level = str(environ['LOG_LEVEL'].upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
-
log.info('[set_ip_retention: lambda_handler] Start')
log.info("Lambda Handler Event: \n{}".format(event))
diff --git a/source/ip_retention_handler/test/conftest.py b/source/ip_retention_handler/test/conftest.py
new file mode 100644
index 00000000..8bfe210a
--- /dev/null
+++ b/source/ip_retention_handler/test/conftest.py
@@ -0,0 +1,126 @@
+###############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance with the License.
+# A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express #
+# or implied. See the License for the specific language governing permissions#
+# and limitations under the License. #
+###############################################################################
+
+import boto3
+import pytest
+from os import environ
+from moto import mock_dynamodb, mock_sns, mock_wafv2
+from moto.core import DEFAULT_ACCOUNT_ID
+from moto.sns import sns_backends
+
+
+REGION = "us-east-1"
+TABLE_NAME = "test_table"
+
+
+@pytest.fixture(scope='module', autouse=True)
+def test_aws_credentials_setup():
+ """Mocked AWS Credentials for moto"""
+ environ['AWS_ACCESS_KEY_ID'] = 'testing'
+ environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
+ environ['AWS_SECURITY_TOKEN'] = 'testing'
+ environ['AWS_SESSION_TOKEN'] = 'testing'
+ environ['AWS_DEFAULT_REGION'] = 'us-east-1'
+ environ['AWS_REGION'] = 'us-east-1'
+
+
+@pytest.fixture(scope='module', autouse=True)
+def test_environment_vars_setup():
+ environ['TABLE_NAME'] = TABLE_NAME
+ environ['STACK_NAME'] = 'waf_stack'
+ environ['SNS_EMAIL'] = 'yes'
+ environ['UUID'] = "waf_test_uuid"
+ environ['SOLUTION_ID'] = "waf_test_solution_id"
+ environ['METRICS_URL'] = "https://testurl.com/generic"
+ environ['SEND_ANONYMOUS_USAGE_DATA'] = 'yes'
+
+
+@pytest.fixture(scope='module', autouse=True)
+def ddb_resource():
+ with mock_dynamodb():
+ connection = boto3.resource("dynamodb", region_name=REGION)
+ yield connection
+
+
+@pytest.fixture(scope='module', autouse=True)
+def ddb_table(ddb_resource):
+ conn = ddb_resource
+ conn.Table(TABLE_NAME)
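+    # Note: Table() above only builds a lazy resource handle; no table is actually created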
+
+
+@pytest.fixture(scope='module', autouse=True)
+def sns_client():
+ with mock_sns():
+ connection = boto3.resource("sns", region_name=REGION)
+ yield connection
+
+
+@pytest.fixture(scope='module', autouse=True)
+def sns_topic():
+ sns_backend = sns_backends[DEFAULT_ACCOUNT_ID]["us-east-1"] # Use the appropriate account/region
+ topic_arn = sns_backend.create_topic("some_topic")
+ return topic_arn
+
+
+@pytest.fixture(scope='module', autouse=True)
+def wafv2_client():
+ with mock_wafv2():
+ connection = boto3.client("wafv2", region_name=REGION)
+ yield connection
+
+
+
+@pytest.fixture(scope='module', autouse=True)
+def set_ip_retention_test_event_setup(ddb_resource):
+ event = {
+ "detail": {
+ "userIdentity": {
+ "arn": "fake-arn"
+ },
+ "eventTime": "2023-04-27T22:33:04Z",
+ "requestParameters": {
+ "name": "fake-Whitelist-ip-set-name",
+ "scope": "CLOUDFRONT",
+ "id": "fake-ip-set-id",
+ "description": "Allow List for IPV4 addresses",
+ "addresses": [
+ "x.x.x.x/32",
+ "y.y.y.y/32",
+ "z.z.z.z/32"
+ ],
+ "lockToken": "fake-lock-token"
+ }
+ }
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def missing_request_parameters_test_event_setup():
+ event = {
+ "detail": {
+ "userIdentity": {
+ "arn": "fake-arn"
+ },
+ "eventTime": "2023-04-27T22:33:04Z"
+ }
+ }
+ return event
\ No newline at end of file
diff --git a/source/ip_retention_handler/test/test_remove_expired_ip.py b/source/ip_retention_handler/test/test_remove_expired_ip.py
index fa97caf5..761d6f38 100644
--- a/source/ip_retention_handler/test/test_remove_expired_ip.py
+++ b/source/ip_retention_handler/test/test_remove_expired_ip.py
@@ -1,25 +1,28 @@
-##############################################################################
-# Copyright Amazon.com, Inc. and its affiliates. All Rights Reserved.
-# #
-# Licensed under the Amazon Software License (the "License"). You may not #
-# use this file except in compliance with the License. A copy of the #
-# License is located at #
-# #
-# http://aws.amazon.com/asl/ #
-# #
-# or in the "license" file accompanying this file. This file is distributed #
-# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, #
-# express or implied. See the License for the specific language governing #
-# permissions and limitations under the License. #
-##############################################################################
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
import logging
from decimal import Decimal
-from remove_expired_ip import RemoveExpiredIP
+from os import environ
+from remove_expired_ip import RemoveExpiredIP, lambda_handler
-event = {
+
+REMOVE_IP_LIST = ["x.x.x.x", "y.y.y.y"]
+EXPECTED_NONE_TYPE_ERROR_MESSAGE = "'NoneType' object has no attribute 'get'"
+EXPECTED_NONE_TYPE_NO_ATTRIBUTE_MESSAGE = "'NoneType' object has no attribute 'status_code'"
+EVENT = {
"Records": [{
- "eventID": "some-event-id",
+ "eventID": "fake-event-id",
"eventName": "REMOVE",
"eventVersion": "1.1",
"eventSource": "aws:dynamodb",
@@ -31,15 +34,15 @@
"N": "1628203246"
},
"IPSetId": {
- "S": "some-ips-set-id"
+ "S": "fake-ips-set-id"
}
},
"OldImage": {
"IPSetName": {
- "S": "some-ip-set-name"
+ "S": "fake-ip-set-name"
},
"CreatedByUser": {
- "S": "some-user"
+ "S": "fake-user"
},
"Scope": {
"S": "CLOUDFRONT"
@@ -48,7 +51,7 @@
"N": "1628203216"
},
"LockToken": {
- "S": "some-lock_token"
+ "S": "fake-lock_token"
},
"IPAdressList": {
"L": [{
@@ -61,10 +64,10 @@
"N": "1628203246"
},
"IPSetId": {
- "S": "some-ips-set-id"
+ "S": "fake-ips-set-id"
}
},
- "SequenceNumber": "some-sequence-number",
+ "SequenceNumber": "fake-sequence-number",
"SizeBytes": 339,
"StreamViewType": "OLD_IMAGE"
},
@@ -72,38 +75,121 @@
"principalId": "dynamodb.amazonaws.com",
"type": "Service"
},
- "eventSourceARN": "arn:aws:dynamodb:us-east-1:some-account:table/some-ddb-table/stream/2021-07-26T22:26:39.107"
+ "eventSourceARN": "arn:aws:dynamodb:us-east-1:fake-account:table/fake-ddb-table/stream/2021-07-26T22:26:39.107"
}]
}
-user_identity = {
+EVENT_NAME_NOT_REMOVE = {
+ "Records": [{
+ "eventID": "fake-event-id",
+ "eventName": "ADD",
+ "eventVersion": "1.1",
+ "eventSource": "aws:dynamodb",
+ "awsRegion": "us-east-1"
+ }]
+ }
+
+USER_IDENTITY = {
"principalId": "dynamodb.amazonaws.com",
"type": "Service"
}
+USER_IDENTITY_NOT_SERVICE = {
+ "principalId": "dynamodb.amazonaws.com",
+ "type": "Any"
+}
+
log = logging.getLogger()
log.setLevel('INFO')
-reip = RemoveExpiredIP(event, log)
+reip = RemoveExpiredIP(EVENT, log)
+
def test_is_none():
is_not_none = reip.is_none('some_value')
is_none = reip.is_none(None)
assert is_not_none == 'some_value' and is_none == 'None'
+
def test_is_ddb_stream_event():
- is_ddb_stream_event = reip.is_ddb_stream_event(user_identity)
+ is_ddb_stream_event = reip.is_ddb_stream_event(USER_IDENTITY)
assert is_ddb_stream_event == True
+
def test_deserialize_ddb_data():
- record = event['Records'][0]
+ record = EVENT['Records'][0]
ddb_ip_set = reip.is_none(record.get('dynamodb', {}).get('OldImage', {}))
desiralized_ddb_ip_set = reip.deserialize_ddb_data(ddb_ip_set)
- expected_desiralized_ddb_ip_set = {'IPSetName': 'some-ip-set-name', 'CreatedByUser': 'some-user', 'Scope': 'CLOUDFRONT', 'CreationTime': Decimal('1628203216'), 'LockToken': 'some-lock_token', 'IPAdressList': ['x.x.x.x/32', 'y.y.y.y/32'], 'ExpirationTime': Decimal('1628203246'), 'IPSetId': 'some-ips-set-id'}
+ expected_desiralized_ddb_ip_set = {'IPSetName': 'fake-ip-set-name', 'CreatedByUser': 'fake-user', 'Scope': 'CLOUDFRONT', 'CreationTime': Decimal('1628203216'), 'LockToken': 'fake-lock_token', 'IPAdressList': ['x.x.x.x/32', 'y.y.y.y/32'], 'ExpirationTime': Decimal('1628203246'), 'IPSetId': 'fake-ips-set-id'}
assert desiralized_ddb_ip_set == expected_desiralized_ddb_ip_set
+
def test_make_ip_list():
waf_ip_list = ['x.x.x.x/32', 'y.y.y.y/32']
ddb_ip_list = ['x.x.x.x/32', 'y.y.y.y/32', 'z.z.z.z/32', 'x.y.y.y/32', 'x.x.y.y/32']
keep_ip_list, remove_ip_list = reip.make_ip_list(log, waf_ip_list, ddb_ip_list)
assert keep_ip_list == []
- assert len(remove_ip_list) > 0
\ No newline at end of file
+ assert len(remove_ip_list) > 0
+
+
+def test_make_ip_list_no_removed_ips():
+ waf_ip_list = ['x.x.x.x/32', 'y.y.y.y/32']
+ ddb_ip_list = ['z.z.z.z/32', 'x.y.y.y/32', 'x.x.y.y/32']
+ keep_ip_list, remove_ip_list = reip.make_ip_list(log, waf_ip_list, ddb_ip_list)
+ assert keep_ip_list == []
+ assert len(remove_ip_list) == 0
+
+
+def test_send_notification(sns_topic):
+    topic_arn = str(sns_topic)
+    # Passes as long as publishing to the mocked topic does not raise
+    reip.send_notification(log, topic_arn, "fake_ip_set_name", "fake_ip_set_id", 30, "fake_lambda_name")
+
+
+def test_send_anonymous_usage_data_allowed_list():
+ try:
+ reip.send_anonymous_usage_data(log, REMOVE_IP_LIST, 'Whitelist')
+ except Exception as e:
+ assert str(e) == EXPECTED_NONE_TYPE_NO_ATTRIBUTE_MESSAGE
+
+
+def test_send_anonymous_usage_data_denied_list():
+ try:
+ reip.send_anonymous_usage_data(log, REMOVE_IP_LIST, 'Blacklist')
+ except Exception as e:
+ assert str(e) == EXPECTED_NONE_TYPE_NO_ATTRIBUTE_MESSAGE
+
+
+def test_send_anonymous_usage_data_other_list():
+ try:
+ reip.send_anonymous_usage_data(log, REMOVE_IP_LIST, 'Otherlist')
+ except Exception as e:
+ assert str(e) == EXPECTED_NONE_TYPE_NO_ATTRIBUTE_MESSAGE
+
+
+def test_send_anonymous_usage_data_empty_list():
+ try:
+ reip.send_anonymous_usage_data(log, [], 'Otherlist')
+ except Exception as e:
+ assert str(e) == EXPECTED_NONE_TYPE_NO_ATTRIBUTE_MESSAGE
+
+
+def test_no_send_anonymous_usage_data():
+    environ['SEND_ANONYMOUS_USAGE_DATA'] = 'no'
+    result = reip.send_anonymous_usage_data(log, [], 'Otherlist')
+    # Data collection is disabled, so the call is expected to return early with None
+    assert result is None
+
+
+def test_none_ip_set():
+    environ['SEND_ANONYMOUS_USAGE_DATA'] = 'no'
+    result = reip.get_ip_set(log, None, 'fake-ip-set-name', 'fake-ip-set-id')
+    assert result is None
+
+
+def test_remove_expired_ip():
+ try:
+ lambda_handler(EVENT, {})
+ except Exception as e:
+ assert str(e) == EXPECTED_NONE_TYPE_ERROR_MESSAGE
+
diff --git a/source/ip_retention_handler/test/test_set_ip_retention.py b/source/ip_retention_handler/test/test_set_ip_retention.py
index 175c74d1..cad502b7 100644
--- a/source/ip_retention_handler/test/test_set_ip_retention.py
+++ b/source/ip_retention_handler/test/test_set_ip_retention.py
@@ -1,92 +1,66 @@
-##############################################################################
-# Copyright Amazon.com, Inc. and its affiliates. All Rights Reserved.
-# #
-# Licensed under the Amazon Software License (the "License"). You may not #
-# use this file except in compliance with the License. A copy of the #
-# License is located at #
-# #
-# http://aws.amazon.com/asl/ #
-# #
-# or in the "license" file accompanying this file. This file is distributed #
-# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, #
-# express or implied. See the License for the specific language governing #
-# permissions and limitations under the License. #
-##############################################################################
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
-import logging
-import os
-from set_ip_retention import SetIPRetention
+from os import environ
+from set_ip_retention import lambda_handler
-event ={
- "eventVersion": "1.08",
- "userIdentity": {
- "type": "AssumedRole",
- "principalId": "some-id",
- "arn": "some-arn",
- "accountId": "some-account",
- "accessKeyId": "some-key-id",
- "sessionContext": {
- "sessionIssuer": {
- "type": "Role",
- "principalId": "some-id",
- "arn": "some-arn",
- "accountId": "some-account",
- "userName": "some-username"
- },
- "webIdFederationData": {},
- "attributes": {
- "creationDate": "2021-07-26T17:42:52Z",
- "mfaAuthenticated": "false"
- }
- }
- },
- "eventTime": "2021-07-26T22:33:04Z",
- "eventSource": "wafv2.amazonaws.com",
- "eventName": "UpdateIPSet",
- "awsRegion": "us-east-1",
- "sourceIPAddress": "some-ip",
- "userAgent": "aws-internal/3 aws-sdk-java/1.11.1004 Linux/5.4.116-64.217.amzn2int.x86_64 OpenJDK_64-Bit_Server_VM/25.292-b10 java/1.8.0_292 vendor/Oracle_Corporation cfg/retry-mode/legacy",
- "requestParameters": {
- "name": "some-Whitelist-ip-set-name",
- "scope": "CLOUDFRONT",
- "id": "some-ip-set-id",
- "description": "Allow List for IPV4 addresses",
- "addresses": [
- "x.x.x.x/32",
- "y.y.y.y/32",
- "z.z.z.z/32"
- ],
- "lockToken": "some-lock-token"
- },
- "responseElements": {
- "nextLockToken": "some-next-lock-token"
- },
- "requestID": "some-request-id",
- "eventID": "some-event-id",
- "readOnly": 'false',
- "eventType": "AwsApiCall",
- "apiVersion": "2019-04-23",
- "managementEvent": 'true',
- "recipientAccountId": "some-account",
- "eventCategory": "Management"
- }
-log = logging.getLogger()
-log.setLevel('INFO')
-sipr = SetIPRetention(event, log)
+SKIP_PROCESS_MESSAGE = "The event for UpdateIPSet API call was made by RemoveExpiredIP lambda instead of user. Skip."
-os.environ["TABLE_NAME"] = 'test_table'
-os.environ['IP_RETENTION_PEROID_ALLOWED_MINUTE'] = '5'
-os.environ['STACK_NAME'] = 'waf-solution'
-def test_get_expiration_time():
- epoch_time = sipr.get_expiration_time("2021-07-26T22:33:04Z", 5)
- assert epoch_time == 1627339084
+def test_set_ip_retention(set_ip_retention_test_event_setup):
+ environ['REMOVE_EXPIRED_IP_LAMBDA_ROLE_NAME'] = 'some_role'
+ environ['IP_RETENTION_PERIOD_ALLOWED_MINUTE'] = '60'
+ environ['IP_RETENTION_PERIOD_DENIED_MINUTE'] = '60'
+ environ['TABLE_NAME'] = "test_table"
+ event = set_ip_retention_test_event_setup
+ result = lambda_handler(event, {})
+ assert result is None
-def test_make_item():
- item = sipr.make_item(event)
- # Remove CreationTime as it is current timestamp that constantly changes
- del item['CreationTime']
+def test_ip_retention_not_activated(set_ip_retention_test_event_setup):
+ environ['REMOVE_EXPIRED_IP_LAMBDA_ROLE_NAME'] = 'some_role'
+ environ['IP_RETENTION_PERIOD_ALLOWED_MINUTE'] = '-1'
+ environ['IP_RETENTION_PERIOD_DENIED_MINUTE'] = '-1'
+ event = set_ip_retention_test_event_setup
+ result = lambda_handler(event, {})
+ assert result is not None
- assert item == {'IPSetId': 'some-ip-set-id', 'IPSetName': 'some-Whitelist-ip-set-name', 'Scope': 'CLOUDFRONT', 'IPAdressList': ['x.x.x.x/32', 'y.y.y.y/32', 'z.z.z.z/32'], 'LockToken': 'some-lock-token', 'IPRetentionPeriodMinute': 15, 'ExpirationTime': 1627339684, 'CreatedByUser': 'waf-solution'}
\ No newline at end of file
+def test_missing_request_parameters_in_event(missing_request_parameters_test_event_setup):
+ environ['REMOVE_EXPIRED_IP_LAMBDA_ROLE_NAME'] = 'some_role'
+ environ['IP_RETENTION_PERIOD_ALLOWED_MINUTE'] = '60'
+ environ['IP_RETENTION_PERIOD_DENIED_MINUTE'] = '60'
+ event = missing_request_parameters_test_event_setup
+ result = lambda_handler(event, {})
+ assert result is None
+
+
+def test_skip_process(set_ip_retention_test_event_setup):
+ environ['REMOVE_EXPIRED_IP_LAMBDA_ROLE_NAME'] = 'fake-arn'
+ event = set_ip_retention_test_event_setup
+ result = {"Message": SKIP_PROCESS_MESSAGE}
+ assert result == lambda_handler(event, {})
+
+
+def test_put_item_exception(set_ip_retention_test_event_setup):
+    environ['REMOVE_EXPIRED_IP_LAMBDA_ROLE_NAME'] = 'some_role'
+    environ['IP_RETENTION_PERIOD_ALLOWED_MINUTE'] = '-1'
+    environ['IP_RETENTION_PERIOD_DENIED_MINUTE'] = '60'
+    environ.pop('TABLE_NAME')
+    event = set_ip_retention_test_event_setup
+    result = False
+    try:
+        lambda_handler(event, {})
+        result = True
+    except Exception:
+        assert result == False
\ No newline at end of file
diff --git a/source/ip_retention_handler/testing_requirements.txt b/source/ip_retention_handler/testing_requirements.txt
deleted file mode 100644
index 7e3aaf95..00000000
--- a/source/ip_retention_handler/testing_requirements.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-botocore>=1.12.99
-boto3>=1.9.99
-mock>=5.0.0
-moto>=4.0.13
-pytest>=7.2.0
-pytest-mock>=3.10.0
-pytest-runner>=6.0.0
-uuid>=1.30
-backoff>=2.2.1
-freezegun>=1.2.2
-pytest-cov
-pytest-env
\ No newline at end of file
diff --git a/source/lib/sns_util.py b/source/lib/sns_util.py
index 47617704..be7104d7 100644
--- a/source/lib/sns_util.py
+++ b/source/lib/sns_util.py
@@ -12,9 +12,6 @@
######################################################################################################################
#!/bin/python
-import boto3
-from os import environ
-from botocore.config import Config
from lib.boto3_util import create_client
class SNS(object):
@@ -31,6 +28,6 @@ def publish(self, topic_arn, message, subject):
)
return response
except Exception as e:
- self.log.error("[sns_util: publish] failed to send email notificaion: \nTopic Arn: %s\nMessage: %s", topic_arn, message)
+ self.log.error("[sns_util: publish] failed to send email notification: \nTopic Arn: %s\nMessage: %s", topic_arn, message)
self.log.error(e)
return None
diff --git a/source/lib/solution_metrics.py b/source/lib/solution_metrics.py
index 7d0dc475..b5c05f2b 100644
--- a/source/lib/solution_metrics.py
+++ b/source/lib/solution_metrics.py
@@ -1,5 +1,5 @@
###############################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance with the License.
@@ -46,7 +46,7 @@ def send_metrics(data,
}
json_data = dumps(metrics_data)
headers = {'content-type': 'application/json'}
- response = requests.post(url, data=json_data, headers=headers, timeout=300)
+ response = requests.post(url, data=json_data, headers=headers, timeout=10)
return response
except Exception as e:
log.error("[solution_metrics:send_metrics] Failed to send solution metrics.")
diff --git a/source/lib/waflibv2.py b/source/lib/waflibv2.py
index 5d443395..41a5ff38 100644
--- a/source/lib/waflibv2.py
+++ b/source/lib/waflibv2.py
@@ -1,5 +1,5 @@
######################################################################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
# with the License. A copy of the License is located at #
@@ -97,17 +97,19 @@ def get_ip_set(self, log, scope, name, arn):
log.error(str(e))
return None
- # Retrieve addresses based on ip_set_id
+    # Get the count of IP addresses in an IP set, looked up by ARN
@on_exception(expo, client.exceptions.WAFInternalErrorException, max_time=MAX_TIME)
- def get_addresses(self, log, scope, name, arn):
+ def get_ip_address_count(self, log, scope, name, arn):
try:
response = self.get_ip_set(log, scope, name, arn)
- addresses = response["IPSet"]["Addresses"]
- return addresses
+ log.info(response)
+ ip_count = len(response['IPSet']['Addresses']) if response is not None else 0
+ log.info("%s IP address count: %s" %(name, str(ip_count)))
+ return ip_count
except Exception as e:
- log.error("Failed to get addresses for ARN %s", str(arn))
+ log.error("Failed to get the count of IP address for ARN %s", str(arn))
log.error(str(e))
- return None
+ return 0
# Update addresses in an IPSet using ip set id
@on_exception(expo, client.exceptions.WAFOptimisticLockException,
@@ -236,35 +238,6 @@ def list_web_acls(self, log, scope):
log.error("Failed to list WebAcld in scope: %s", str(scope))
log.error(str(e))
return None
-
- # log when retry is stopped
- # def give_up_retry(self, log, e):
- # log.error("Giving up retry after %s times.",str(API_CALL_NUM_RETRIES))
- # log.error(e)
-
- #################################################################
- # Following functions only used for testing, not in WAF Solution
- #################################################################
-
- @on_exception(expo,
- (client.exceptions.WAFInternalErrorException,
- client.exceptions.WAFOptimisticLockException,
- client.exceptions.WAFLimitsExceededException),
- max_time=MAX_TIME)
- def create_ip_set(self, log, scope, name, description, version, addresses):
- try:
- response = client.create_ip_set(
- Scope=scope,
- Name=name,
- Description=description,
- IPAddressVersion=version,
- Addresses=addresses
- )
- return response
- except Exception as e:
- log.error("Failed to create IPSet: %s", str(name))
- log.error(str(e))
- return None
@on_exception(expo,
(client.exceptions.WAFInternalErrorException,
@@ -286,25 +259,4 @@ def delete_ip_set(self, log, scope, name, ip_set_id):
except Exception as e:
log.error("Failed to delete IPSet: %s", str(name))
log.error(str(e))
- return None
-
- @on_exception(expo, client.exceptions.WAFInternalErrorException, max_time=MAX_TIME)
- def list_ip_sets(self, log, scope, marker=None):
- try:
- response = None
- if marker == None:
- response = client.list_ip_sets(
- Scope=scope,
- Limit=50
- )
- else:
- response = client.list_ip_sets(
- Scope=scope,
- NextMarker=marker,
- Limit=50
- )
- return response
- except Exception as e:
- log.error("Failed to list IPSets in scope: %s", str(scope))
- log.error(str(e))
return None
\ No newline at end of file
diff --git a/source/log_parser/.coveragerc b/source/log_parser/.coveragerc
new file mode 100644
index 00000000..3aa79036
--- /dev/null
+++ b/source/log_parser/.coveragerc
@@ -0,0 +1,29 @@
+[run]
+omit =
+ test/*
+ */__init__.py
+ **/__init__.py
+ backoff/*
+ bin/*
+ boto3/*
+ botocore/*
+ certifi/*
+ charset*/*
+ crhelper*
+ chardet*
+ dateutil/*
+ idna/*
+ jmespath/*
+ lib/*
+ package*
+ python_*
+ requests/*
+ s3transfer/*
+ six*
+ tenacity*
+ tests
+ urllib3/*
+ yaml
+ PyYAML-*
+source =
+ .
\ No newline at end of file
diff --git a/source/log_parser/add_athena_partitions.py b/source/log_parser/add_athena_partitions.py
index ac65c55a..3aab4a30 100644
--- a/source/log_parser/add_athena_partitions.py
+++ b/source/log_parser/add_athena_partitions.py
@@ -1,5 +1,5 @@
##############################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance #
@@ -14,29 +14,17 @@
##############################################################################
import datetime
-import boto3
-import re
-import logging
-from os import environ
-from botocore.config import Config
from lib.boto3_util import create_client
+from lib.logging_util import set_log_level
-def lambda_handler(event, context):
+def lambda_handler(event, _):
"""
This function adds a new hourly partition to athena table.
It runs every hour, triggered by a CloudWatch event rule.
"""
- log = logging.getLogger()
+ log = set_log_level()
log.debug('[add-athena-partition lambda_handler] Start')
try:
- # ---------------------------------------------------------
- # Set Log Level
- # ---------------------------------------------------------
- log_level = str(environ['LOG_LEVEL'].upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
-
# ----------------------------------------------------------
# Process event
# ----------------------------------------------------------
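The inline level-setting logic removed above is now centralized in `lib/logging_util.py`. A plausible shape for that helper, inferred from the code it replaces (the actual implementation may differ):

```python
import logging
from os import environ

def set_log_level(default_level='ERROR'):
    # Read LOG_LEVEL from the environment, falling back to default_level
    # when it is unset or not a recognized level name.
    log = logging.getLogger()
    level = str(environ.get('LOG_LEVEL', default_level)).upper()
    if level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
        level = default_level
    log.setLevel(level)
    return log
```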
diff --git a/source/log_parser/athena_log_parser.py b/source/log_parser/athena_log_parser.py
new file mode 100644
index 00000000..20a5d7ff
--- /dev/null
+++ b/source/log_parser/athena_log_parser.py
@@ -0,0 +1,150 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import csv
+import datetime
+from os import environ, remove
+from build_athena_queries import build_athena_query_for_app_access_logs, \
+ build_athena_query_for_waf_logs
+from lib.boto3_util import create_client
+from lib.s3_util import S3
+from lambda_log_parser import LambdaLogParser
+
+
+class AthenaLogParser(object):
+ """
+    This class includes functions to process WAF and App access logs using the Athena log parser
+ """
+
+ def __init__(self, log):
+ self.log = log
+ self.s3_util = S3(log)
+ self.lambda_log_parser = LambdaLogParser(log)
+
+
+ def process_athena_scheduler_event(self, event):
+ self.log.debug("[athena_log_parser: process_athena_scheduler_event] Start")
+
+ log_type = str(environ['LOG_TYPE'].upper())
+
+ # Execute athena query for CloudFront or ALB logs
+ if event['resourceType'] == 'LambdaAthenaAppLogParser' \
+ and (log_type == 'CLOUDFRONT' or log_type == 'ALB'):
+ self.execute_athena_query(log_type, event)
+
+ # Execute athena query for WAF logs
+ if event['resourceType'] == 'LambdaAthenaWAFLogParser':
+ self.execute_athena_query('WAF', event)
+
+ self.log.debug("[athena_log_parser: process_athena_scheduler_event] End")
+
+
+ def execute_athena_query(self, log_type, event):
+ self.log.debug("[athena_log_parser: execute_athena_query] Start")
+
+ athena_client = create_client('athena')
+ s3_output = "s3://%s/athena_results/" % event['accessLogBucket']
+ database_name = event['glueAccessLogsDatabase']
+
+ # Dynamically build query string using partition
+ # for CloudFront or ALB logs
+ if log_type == 'CLOUDFRONT' or log_type == 'ALB':
+ query_string = build_athena_query_for_app_access_logs(
+ self.log,
+ log_type,
+ event['glueAccessLogsDatabase'],
+ event['glueAppAccessLogsTable'],
+ datetime.datetime.utcnow(),
+ int(environ['WAF_BLOCK_PERIOD']),
+ int(environ['ERROR_THRESHOLD'])
+ )
+ else: # Dynamically build query string using partition for WAF logs
+ query_string = build_athena_query_for_waf_logs(
+ self.log,
+ event['glueAccessLogsDatabase'],
+ event['glueWafAccessLogsTable'],
+ datetime.datetime.utcnow(),
+ int(environ['WAF_BLOCK_PERIOD']),
+ int(environ['REQUEST_THRESHOLD']),
+ environ['REQUEST_THRESHOLD_BY_COUNTRY'],
+ environ['HTTP_FLOOD_ATHENA_GROUP_BY'],
+ int(environ['ATHENA_QUERY_RUN_SCHEDULE'])
+ )
+
+ response = athena_client.start_query_execution(
+ QueryString=query_string,
+ QueryExecutionContext={'Database': database_name},
+ ResultConfiguration={
+ 'OutputLocation': s3_output,
+ 'EncryptionConfiguration': {
+ 'EncryptionOption': 'SSE_S3'
+ }
+ },
+ WorkGroup=event['athenaWorkGroup']
+ )
+
+ self.log.info("[athena_log_parser: execute_athena_query] Query Execution Response: {}".format(response))
+ self.log.info("[athena_log_parser: execute_athena_query] End")
+
+
+ def read_athena_result_file(self, local_file_path):
+ self.log.debug("[athena_log_parser: read_athena_result_file] Start")
+
+ outstanding_requesters = {
+ 'general': {},
+ 'uriList': {}
+ }
+ utc_now_timestamp_str = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S %Z%z")
+ with open(local_file_path, 'r') as csvfile:
+ reader = csv.DictReader(csvfile)
+ for row in reader:
+                # Reuse the lambda log parser data structure (max_counter_per_min,
+                # updated_at) so the same update_ip_set code path can consume Athena results.
+ outstanding_requesters['general'][row['client_ip']] = {
+ "max_counter_per_min": row['max_counter_per_min'],
+ "updated_at": utc_now_timestamp_str
+ }
+ remove(local_file_path)
+
+ self.log.debug("[athena_log_parser: read_athena_result_file] local_file_path: %s",
+ local_file_path)
+ self.log.debug("[athena_log_parser: read_athena_result_file] End")
+
+ return outstanding_requesters
+
+
+ def process_athena_result(self, bucket_name, key_name, ip_set_type):
+ self.log.debug("[athena_log_parser: process_athena_result] Start")
+
+ try:
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[athena_log_parser: process_athena_result] Download file from S3")
+ # --------------------------------------------------------------------------------------------------------------
+ local_file_path = '/tmp/' + key_name.split('/')[-1]
+ self.s3_util.download_file_from_s3(bucket_name, key_name, local_file_path)
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[athena_log_parser: process_athena_result] Read file content")
+ # --------------------------------------------------------------------------------------------------------------
+ outstanding_requesters = self.read_athena_result_file(local_file_path)
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[athena_log_parser: process_athena_result] Update WAF IP Sets")
+ # --------------------------------------------------------------------------------------------------------------
+ self.lambda_log_parser.update_ip_set(ip_set_type, outstanding_requesters)
+
+ except Exception as e:
+ self.log.error("[athena_log_parser: process_athena_result] Error to read input file")
+ self.log.error(e)
+
+ self.log.debug("[athena_log_parser: process_athena_result] End")
diff --git a/source/log_parser/build_athena_queries.py b/source/log_parser/build_athena_queries.py
index 7fe52537..b4455ff8 100644
--- a/source/log_parser/build_athena_queries.py
+++ b/source/log_parser/build_athena_queries.py
@@ -1,5 +1,5 @@
##############################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance #
@@ -14,6 +14,7 @@
##############################################################################
import datetime
+import json
def build_athena_query_for_app_access_logs(
@@ -69,7 +70,7 @@ def build_athena_query_for_app_access_logs(
log, start_timestamp, end_timestamp)
query_string = query_string + \
build_athena_query_part_three_for_app_access_logs(
- log, error_threshold, start_timestamp, end_timestamp)
+ log, error_threshold, start_timestamp)
log.info(
"[build_athena_query_for_app_access_logs] \
@@ -83,7 +84,9 @@ def build_athena_query_for_app_access_logs(
def build_athena_query_for_waf_logs(
log, database_name, table_name, end_timestamp,
- waf_block_period, request_threshold):
+ waf_block_period, request_threshold,
+ request_threshold_by_country,
+ group_by, athena_query_run_schedule):
"""
This function dynamically builds athena query
for cloudfront logs by adding partition values:
@@ -98,6 +101,8 @@ def build_athena_query_for_waf_logs(
end_timestamp: datetime. The end time stamp of the logs being scanned
waf_block_period: int. The period (in minutes) to block applicable IP addresses
request_threshold: int. The maximum acceptable bad requests per minute per IP address
+        request_threshold_by_country: json string. The maximum acceptable bad requests per minute per country
+        group_by: string. The group-by columns (country, uri or both) selected by user
+        athena_query_run_schedule: int. The Athena query run schedule (in minutes) set in the EventBridge events rule
Returns:
Athena query string
@@ -123,14 +128,21 @@ def build_athena_query_for_waf_logs(
"[build_athena_query_for_waf_logs] \
Build query")
# --------------------------------------------------
+ additional_columns_group_one, additional_columns_group_two \
+ = build_select_group_by_columns_for_waf_logs(
+ log, group_by, request_threshold_by_country)
query_string = build_athena_query_part_one_for_waf_logs(
- log, database_name, table_name)
+ log, database_name, table_name,
+ additional_columns_group_one,
+ additional_columns_group_two)
query_string = query_string + \
build_athena_query_part_two_for_partition(
log, start_timestamp, end_timestamp)
query_string = query_string + \
build_athena_query_part_three_for_waf_logs(
- log, request_threshold, start_timestamp, end_timestamp)
+ log, request_threshold, request_threshold_by_country,
+ athena_query_run_schedule, additional_columns_group_two,
+ start_timestamp)
log.info(
"[build_athena_query_for_waf_logs] \
@@ -206,8 +218,52 @@ def build_athena_query_part_one_for_alb_logs(
return query_string
+def build_select_group_by_columns_for_waf_logs(
+ log, group_by, request_threshold_by_country):
+ """
+ This function dynamically builds user selected additional columns
+ in select and group by statement of the athena query.
+
+ Args:
+ log: logging object
+        group_by: string. The group by columns (country, uri or both) selected by user
+        request_threshold_by_country: json string. Request thresholds for countries configured by user
+
+ Returns:
+ string of columns
+ """
+
+ additional_columns_group_one = ''
+ additional_columns_group_two = ''
+
+ if group_by.lower() == 'country' or \
+        (group_by.lower() == 'none' and len(request_threshold_by_country) > 0):
+ additional_columns_group_one = 'httprequest.country as country,'
+ additional_columns_group_two = ', country'
+ elif group_by.lower() == 'uri':
+ # Add country if threshold by country is configured
+ additional_columns_group_one = \
+ 'httprequest.uri as uri,' \
+ if len(request_threshold_by_country) == 0 \
+ else 'httprequest.country as country, httprequest.uri as uri,'
+ additional_columns_group_two = \
+ ', uri' \
+ if len(request_threshold_by_country) == 0 \
+ else ', country, uri'
+ elif group_by.lower() == 'country and uri':
+ additional_columns_group_one = 'httprequest.country as country, httprequest.uri as uri,'
+ additional_columns_group_two = ', country, uri'
+
+ log.debug(
+ "[build_select_group_by_columns_for_waf_logs] \
+ Additional columns group one: %s\nAdditional columns group two: %s"
+ %(additional_columns_group_one, additional_columns_group_two))
+ return additional_columns_group_one, additional_columns_group_two
+
+
def build_athena_query_part_one_for_waf_logs(
- log, database_name, table_name):
+ log, database_name, table_name,
+ additional_columns_group_one,
+ additional_columns_group_two):
"""
This function dynamically builds the first part
of the athena query.
@@ -221,12 +277,12 @@ def build_athena_query_part_one_for_waf_logs(
Athena query string
"""
query_string = "SELECT\n" \
- "\tclient_ip,\n" \
+ "\tclient_ip" + additional_columns_group_two + ",\n" \
"\tMAX_BY(counter, counter) as max_counter_per_min\n" \
" FROM (\n" \
"\tWITH logs_with_concat_data AS (\n" \
"\t\tSELECT\n" \
- "\t\t\thttprequest.clientip as client_ip,\n" \
+ "\t\t\thttprequest.clientip as client_ip," + additional_columns_group_one + "\n" \
"\t\t\tfrom_unixtime(timestamp/1000) as datetime\n" \
"\t\tFROM\n" \
+ "\t\t\t" \
@@ -310,7 +366,7 @@ def build_athena_query_part_two_for_partition(
def build_athena_query_part_three_for_app_access_logs(
- log, error_threshold, start_timestamp, end_timestamp):
+ log, error_threshold, start_timestamp):
"""
This function dynamically builds the third part
of the athena query.
@@ -319,7 +375,6 @@ def build_athena_query_part_three_for_app_access_logs(
log: logging object
error_threshold: int. The maximum acceptable bad requests per minute per IP address
start_timestamp: datetime. The start time stamp of the logs being scanned
- end_timestamp: datetime. The end time stamp of the logs being scanned
Returns:
Athena query string
@@ -351,25 +406,73 @@ def build_athena_query_part_three_for_app_access_logs(
return query_string
+def build_having_clause_for_waf_logs(
+ log, default_request_threshold,
+ request_threshold_by_country,
+ athena_query_run_schedule):
+ """
+ This function dynamically builds having clause of the athena query.
+
+    Args:
+        log: logging object
+        default_request_threshold: int. The default maximum acceptable requests per query run interval
+        request_threshold_by_country: json string. Request thresholds for countries configured by user
+        athena_query_run_schedule: int. The Athena query run schedule (in minutes)
+
+ Returns:
+ string of having clause
+ """
+ request_threshold_calculated = default_request_threshold / athena_query_run_schedule
+
+ having_clause_string = "\t\tCOUNT(*) >= " + str(request_threshold_calculated)
+
+    if len(request_threshold_by_country) > 0:
+ having_clause_string = ''
+ not_in_country_string = ''
+
+ request_threshold_by_country_json = json.loads(request_threshold_by_country)
+ for country in request_threshold_by_country_json:
+ request_threshold_for_country = request_threshold_by_country_json[country]
+ request_threshold_for_country_calculated = request_threshold_for_country / athena_query_run_schedule
+ request_threshold_for_country_string = "\t\t(COUNT(*) >= " + str(request_threshold_for_country_calculated) + " AND country = '" + country + "') OR \n"
+ having_clause_string += request_threshold_for_country_string
+ not_in_country_string += "'" + country + "',"
+
+ # Remove last comma and add closing parentheses
+ not_in_country_string = not_in_country_string[:-1] + "))"
+ not_in_country_prefix = "\t\t(COUNT(*) >= " + str(request_threshold_calculated) + " AND country NOT IN ("
+ request_threshold_for_others_string = not_in_country_prefix + not_in_country_string
+ having_clause_string = having_clause_string + request_threshold_for_others_string
+
+ log.debug(
+ "[build_select_group_by_columns_for_waf_logs] \
+ Having clause: %s"%having_clause_string)
+ return having_clause_string
+
+
def build_athena_query_part_three_for_waf_logs(
- log, request_threshold, start_timestamp, end_timestamp):
+ log, default_request_threshold, request_threshold_by_country,
+ athena_query_run_schedule, additional_columns_group_two,
+ start_timestamp):
"""
This function dynamically builds the third part
of the athena query.
Args:
log: logging object
- error_threshold: int. The maximum acceptable bad requests per minute per IP address
+        default_request_threshold: int. The maximum acceptable count of requests per IP address within the scheduled query run interval (default 5 minutes)
start_timestamp: datetime. The start time stamp of the logs being scanned
- end_timestamp: datetime. The end time stamp of the logs being scanned
+        request_threshold_by_country: json string. The maximum acceptable count of requests per IP address per specified country within the scheduled query run interval (default 5 minutes)
+        athena_query_run_schedule: int. The Athena query run schedule (in minutes) set in the EventBridge events rule
+        additional_columns_group_two: string. Additional group-by columns (e.g. ", country, uri") appended after client_ip
Returns:
Athena query string
"""
- request_threshold_calculated = request_threshold / 5
+ having_clause = build_having_clause_for_waf_logs(
+ log, default_request_threshold, request_threshold_by_country,
+ athena_query_run_schedule)
+
query_string = "\n\t)\n" \
"\tSELECT\n" \
- "\t\tclient_ip,\n" \
+ "\t\tclient_ip" + additional_columns_group_two + ",\n" \
"\t\tCOUNT(*) as counter\n" \
"\tFROM\n" \
"\t\tlogs_with_concat_data\n" \
@@ -377,17 +480,16 @@ def build_athena_query_part_three_for_waf_logs(
"\t\tdatetime > TIMESTAMP " \
+ "'" + str(start_timestamp)[0:19] + "'"\
"\n\tGROUP BY\n" \
- "\t\tclient_ip,\n" \
+ "\t\tclient_ip" + additional_columns_group_two + ",\n" \
"\t\tdate_trunc('minute', datetime)\n" \
"\tHAVING\n" \
- "\t\tCOUNT(*) >= " \
- + str(request_threshold_calculated) + \
+ + having_clause + \
"\n) GROUP BY\n" \
- "\tclient_ip\n" \
+ "\tclient_ip" + additional_columns_group_two + "\n" \
"ORDER BY\n" \
"\tmax_counter_per_min DESC\n" \
"LIMIT 10000;"
log.debug(
"[build_athena_query_part_three_for_waf_logs] \
Query string part Three:\n %s"%query_string)
- return query_string
+ return query_string
\ No newline at end of file
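A worked example of the per-country HAVING clause built above, calling build_having_clause_for_waf_logs directly. The threshold values are illustrative; note how each threshold is divided by the query run schedule to get a per-interval count:

```python
import logging

log = logging.getLogger()
clause = build_having_clause_for_waf_logs(
    log,
    100,                      # default request threshold
    '{"US": 50, "CN": 200}',  # per-country overrides, as a json string
    5)                        # Athena query run schedule, in minutes

# Roughly produces (whitespace aside):
#   (COUNT(*) >= 10.0 AND country = 'US') OR
#   (COUNT(*) >= 40.0 AND country = 'CN') OR
#   (COUNT(*) >= 20.0 AND country NOT IN ('US','CN'))
print(clause)
```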
diff --git a/source/log_parser/lambda_log_parser.py b/source/log_parser/lambda_log_parser.py
new file mode 100644
index 00000000..dbc88c7d
--- /dev/null
+++ b/source/log_parser/lambda_log_parser.py
@@ -0,0 +1,632 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import gzip
+import json
+import datetime
+import os
+from os import remove
+from time import sleep
+from urllib.parse import urlparse
+from lib.waflibv2 import WAFLIBv2
+from lib.s3_util import S3
+
+TMP_DIR = '/tmp/'
+FORMAT_DATE_TIME = "%Y-%m-%d %H:%M:%S %Z%z"
+
+class LambdaLogParser(object):
+ """
+    This class includes functions to process WAF and App access logs using the Lambda log parser
+ """
+
+ def __init__(self, log):
+ self.log = log
+ self.config = {}
+ self.delay_between_updates = 5
+ self.scope = os.getenv('SCOPE')
+ self.scanners = 1
+ self.flood = 2
+ self.s3_util = S3(log)
+ self.waflib = WAFLIBv2()
+
+ # CloudFront Access Logs
+ # http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#BasicDistributionFileFormat
+ self.line_format_cloud_front = {
+ 'delimiter': '\t',
+ 'date': 0,
+ 'time': 1,
+ 'source_ip': 4,
+ 'uri': 7,
+ 'code': 8
+ }
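+
+        # Example CloudFront entry, tab-delimited (illustrative values):
+        #   2023-05-11  00:01:02  IAD89-C1  1045  203.0.113.9  GET  d111111abcdef8.cloudfront.net  /path  200  ...
+        # Index 0 -> date, 1 -> time, 4 -> c-ip, 7 -> cs-uri-stem, 8 -> sc-status.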
+
+ # ALB Access Logs
+ # http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
+ self.line_format_alb = {
+ 'delimiter': ' ',
+ 'timestamp': 1,
+ 'source_ip': 3,
+ 'code': 9, # GitHub issue #44. Changed from elb_status_code to target_status_code.
+ 'uri': 13
+ }
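+
+        # Example ALB entry, space-delimited (illustrative values):
+        #   https 2023-05-11T00:01:02Z app/my-alb/50dc6c 203.0.113.9:4321 10.0.0.5:80 \
+        #     0.001 0.002 0.000 200 200 34 366 "GET https://example.com:443/path HTTP/1.1" ...
+        # Index 1 -> timestamp, 3 -> client:port, 9 -> target_status_code,
+        # 13 -> the URL inside the quoted "METHOD URL PROTOCOL" request field.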
+
+
+ def read_waf_log_file(self, line):
+ line = line.decode() # Remove the b in front of each field
+ line_data = json.loads(str(line))
+
+ request_key = datetime.datetime.fromtimestamp(int(line_data['timestamp']) / 1000.0).isoformat(
+ sep='T', timespec='minutes')
+ request_key += ' ' + line_data['httpRequest']['clientIp']
+ uri = urlparse(line_data['httpRequest']['uri']).path
+
+ return request_key, uri, line_data
+
+
+ def read_alb_log_file(self, line):
+ line_data = line.split(self.line_format_alb['delimiter'])
+ request_key = line_data[self.line_format_alb['timestamp']].rsplit(':', 1)[0]
+ request_key += ' ' + line_data[self.line_format_alb['source_ip']].rsplit(':', 1)[0]
+ return_code_index = self.line_format_alb['code']
+ uri = urlparse(line_data[self.line_format_alb['uri']]).path
+
+ return request_key, uri, return_code_index, line_data
+
+
+ def read_cloudfront_log_file(self, line):
+ line_data = line.split(self.line_format_cloud_front['delimiter'])
+ request_key = line_data[self.line_format_cloud_front['date']]
+ request_key += ' ' + line_data[self.line_format_cloud_front['time']][:-3]
+ request_key += ' ' + line_data[self.line_format_cloud_front['source_ip']]
+ return_code_index = self.line_format_cloud_front['code']
+ uri = urlparse(line_data[self.line_format_cloud_front['uri']]).path
+
+ return request_key, uri, return_code_index, line_data
+
+
+ def update_threshold_counter(self, request_key, uri, return_code_index, line_data, counter):
+        if return_code_index is None or line_data[return_code_index] in self.config['general']['errorCodes']:
+ counter['general'][request_key] = counter['general'][request_key] + 1 \
+ if request_key in counter['general'].keys() else 1
+
+ if 'uriList' in self.config and uri in self.config['uriList'].keys():
+ if uri not in counter['uriList'].keys():
+ counter['uriList'][uri] = {}
+
+ counter['uriList'][uri][request_key] = counter['uriList'][uri][request_key] + 1 \
+ if request_key in counter['uriList'][uri].keys() else 1
+
+ return counter
+
+
+ def read_log_file(self, local_file_path, log_type, error_count):
+ counter = {
+ 'general': {},
+ 'uriList': {}
+ }
+ outstanding_requesters = {
+ 'general': {},
+ 'uriList': {}
+ }
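+        # Resulting shapes (illustrative): counter['general'] maps a
+        # "<minute-bucket> <ip>" request key to its request count, and
+        # counter['uriList'][uri] maps the same keys to per-URI counts.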
+
+ with gzip.open(local_file_path, 'r') as content:
+ for line in content:
+ try:
+ request_key = ""
+ uri = ""
+ return_code_index = None
+
+ if log_type == 'waf':
+ request_key, uri, line_data = self.read_waf_log_file(line)
+ elif log_type == 'alb':
+ line = line.decode('utf8')
+ if line.startswith('#'):
+ continue
+ request_key, uri, return_code_index, line_data = \
+ self.read_alb_log_file(line)
+ elif log_type == 'cloudfront':
+ line = line.decode('utf8')
+ if line.startswith('#'):
+ continue
+ request_key, uri, return_code_index, line_data = \
+ self.read_cloudfront_log_file(line)
+ else:
+ return outstanding_requesters
+
+ if 'ignoredSufixes' in self.config['general'] and uri.endswith(
+ tuple(self.config['general']['ignoredSufixes'])):
+ self.log.debug(
+ "[lambda_log_parser: get_outstanding_requesters] Skipping line %s. Included in ignoredSufixes." % line)
+ continue
+
+ counter = self.update_threshold_counter(request_key, uri, return_code_index, line_data, counter)
+
+ except Exception as e:
+ error_count += 1
+ self.log.error("[lambda_log_parser: get_outstanding_requesters] Error to process line: %s" % line)
+ self.log.error(str(e))
+                    if error_count == 5:  # allow 5 errors before stopping the function execution
+ raise
+ remove(local_file_path)
+ return counter, outstanding_requesters
+
+
+ def parse_log_file(self, bucket_name, key_name, log_type):
+ self.log.debug("[lambda_log_parser: parse_log_file] Start")
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[lambda_log_parser: parse_log_file] Download file from S3")
+ # --------------------------------------------------------------------------------------------------------------
+ local_file_path = TMP_DIR + key_name.split('/')[-1]
+ self.s3_util.download_file_from_s3(bucket_name, key_name, local_file_path)
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[lambda_log_parser: parse_log_file] Read file content")
+ # --------------------------------------------------------------------------------------------------------------
+ error_count = 0
+ counter, outstanding_requesters = self.read_log_file(local_file_path, log_type, error_count)
+
+ return counter, outstanding_requesters
+
+
+ def get_general_outstanding_requesters(self, counter, outstanding_requesters,
+ threshold, utc_now_timestamp_str):
+ for k, num_reqs in counter['general'].items():
+ try:
+ k = k.split(' ')[-1]
+ if num_reqs >= self.config['general'][threshold]:
+ if k not in outstanding_requesters['general'].keys() or num_reqs > \
+ outstanding_requesters['general'][k]['max_counter_per_min']:
+ outstanding_requesters['general'][k] = {
+ 'max_counter_per_min': num_reqs,
+ 'updated_at': utc_now_timestamp_str
+ }
+ except Exception:
+ self.log.error(
+ "[lambda_log_parser: get_general_outstanding_requesters] \
+                    Failed to process general outstanding requester: %s" % k)
+
+ return outstanding_requesters
+
+
+ def get_urilist_outstanding_requesters(self, counter, outstanding_requesters,
+ threshold, utc_now_timestamp_str):
+ for uri in counter['uriList'].keys():
+ for k, num_reqs in counter['uriList'][uri].items():
+ try:
+ k = k.split(' ')[-1]
+ if num_reqs >= self.config['uriList'][uri][threshold]:
+ if uri not in outstanding_requesters['uriList'].keys():
+ outstanding_requesters['uriList'][uri] = {}
+
+ if k not in outstanding_requesters['uriList'][uri].keys() or num_reqs > \
+ outstanding_requesters['uriList'][uri][k]['max_counter_per_min']:
+ outstanding_requesters['uriList'][uri][k] = {
+ 'max_counter_per_min': num_reqs,
+ 'updated_at': utc_now_timestamp_str
+ }
+ except Exception:
+ self.log.error(
+ "[lambda_log_parser: get_urilist_outstanding_requesters] \
+                    Failed to process outstanding requester: (%s) %s" % (uri, k))
+
+ return outstanding_requesters
+
+
+ def get_outstanding_requesters(self, log_type, counter, outstanding_requesters):
+ self.log.debug("[lambda_log_parser: get_outstanding_requesters] Start")
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[lambda_log_parser: get_outstanding_requesters] Keep only outstanding requesters")
+ # --------------------------------------------------------------------------------------------------------------
+ threshold = 'requestThreshold' if log_type == 'waf' else "errorThreshold"
+ utc_now_timestamp_str = datetime.datetime.now(datetime.timezone.utc).strftime(FORMAT_DATE_TIME)
+ outstanding_requesters = self.get_general_outstanding_requesters(
+ counter, outstanding_requesters,threshold, utc_now_timestamp_str)
+ outstanding_requesters = self.get_urilist_outstanding_requesters(
+ counter, outstanding_requesters, threshold, utc_now_timestamp_str)
+
+ self.log.debug("[lambda_log_parser: get_outstanding_requesters] End")
+ return outstanding_requesters
+
+
+ def calculate_last_update_age(self, response):
+ utc_last_modified = response['LastModified'].astimezone(datetime.timezone.utc)
+ utc_now_timestamp = datetime.datetime.now(datetime.timezone.utc)
+ utc_now_timestamp_str = utc_now_timestamp.strftime(FORMAT_DATE_TIME)
+ last_update_age = int(((utc_now_timestamp - utc_last_modified).total_seconds()) / 60)
+
+ return utc_now_timestamp, utc_now_timestamp_str, last_update_age
+
+
+ def get_current_blocked_ips(self, bucket_name, key_name, output_key_name):
+ local_file_path = TMP_DIR + key_name.split('/')[-1] + '_REMOTE.json'
+ self.s3_util.download_file_from_s3(bucket_name, output_key_name, local_file_path)
+
+ remote_outstanding_requesters = {
+ 'general': {},
+ 'uriList': {}
+ }
+
+ with open(local_file_path, 'r') as file_content:
+ remote_outstanding_requesters = json.loads(file_content.read())
+ remove(local_file_path)
+
+ return remote_outstanding_requesters
+
+
+ def iterate_general_list_for_existing_ip(self, k, v, outstanding_requesters, utc_now_timestamp_str):
+ self.log.info(
+ "[lambda_log_parser: iterate_general_list_for_existing_ip] \
+ Updating general data of BLOCK %s rule" % k)
+
+ outstanding_requesters['general'][k]['updated_at'] = utc_now_timestamp_str
+ if v['max_counter_per_min'] > outstanding_requesters['general'][k]['max_counter_per_min']:
+ outstanding_requesters['general'][k]['max_counter_per_min'] = v['max_counter_per_min']
+
+ return outstanding_requesters
+
+
+ def iterate_general_list_for_new_ip(self, k, v, threshold, outstanding_requesters,
+ utc_now_timestamp, force_update):
+ utc_prev_updated_at = datetime.datetime.strptime(v['updated_at'],
+ FORMAT_DATE_TIME).astimezone(datetime.timezone.utc)
+ total_diff_min = ((utc_now_timestamp - utc_prev_updated_at).total_seconds()) / 60
+
+ if v['max_counter_per_min'] < self.config['general'][threshold]:
+ force_update = True
+ self.log.info(
+ "[lambda_log_parser: merge_general_outstanding_requesters] \
+ %s is bellow the current general threshold" % k)
+
+ elif total_diff_min < self.config['general']['blockPeriod']:
+ self.log.debug("[merge_general_outstanding_requesters] Keeping %s in general" % k)
+ outstanding_requesters['general'][k] = v
+
+ else:
+ force_update = True
+ self.log.info("[lambda_log_parser: merge_general_outstanding_requesters] \
+ %s expired in general" % k)
+
+ return outstanding_requesters, force_update
+
+
+ def merge_general_outstanding_requesters(self, threshold, remote_outstanding_requesters,
+ outstanding_requesters, utc_now_timestamp_str,
+ utc_now_timestamp, force_update):
+ try:
+ for k, v in remote_outstanding_requesters['general'].items():
+ try:
+ if k in outstanding_requesters['general'].keys():
+ self.iterate_general_list_for_existing_ip(
+ k, v, outstanding_requesters, utc_now_timestamp_str)
+
+ else:
+                        outstanding_requesters, force_update = \
+ self.iterate_general_list_for_new_ip(
+ k, v, threshold, outstanding_requesters, utc_now_timestamp, force_update)
+
+ except Exception as e:
+ self.log.error("[lambda_log_parser: merge_outstanding_requesters] Error merging general %s rule" % k)
+ self.log.error(str(e))
+ except Exception as e:
+ self.log.error("[lambda_log_parser: merge_outstanding_requesters] Failed to process general group.")
+ self.log.error(str(e))
+
+ return remote_outstanding_requesters, force_update
+
+
+ def iterate_urilist_for_existing_uri(self, uri, k, v, outstanding_requesters, utc_now_timestamp_str):
+ self.log.info(
+ "[lambda_log_parser: iterate_urilist_for_existing_uri] \
+ Updating uriList (%s) data of BLOCK %s rule" % (uri, k))
+
+ outstanding_requesters['uriList'][uri][k]['updated_at'] = utc_now_timestamp_str
+ if v['max_counter_per_min'] > outstanding_requesters['uriList'][uri][k]['max_counter_per_min']:
+ outstanding_requesters['uriList'][uri][k]['max_counter_per_min'] = v['max_counter_per_min']
+
+ return outstanding_requesters
+
+
+ def iterate_urilist_for_new_uri(self, uri, k, v, threshold, utc_now_timestamp,
+ outstanding_requesters, force_update):
+ utc_prev_updated_at = datetime.datetime.strptime(
+ v['updated_at'], FORMAT_DATE_TIME).astimezone(datetime.timezone.utc)
+ total_diff_min = ((utc_now_timestamp - utc_prev_updated_at).total_seconds()) / 60
+
+ if v['max_counter_per_min'] < self.config['uriList'][uri][threshold]:
+ force_update = True
+ self.log.info(
+ "[lambda_log_parser: iterate_urilist_for_new_uri] \
+                %s is below the current uriList (%s) threshold" % (
+ k, uri))
+
+ elif total_diff_min < self.config['general']['blockPeriod']:
+ self.log.debug(
+ "[lambda_log_parser: iterate_urilist_for_new_uri] \
+ Keeping %s in uriList (%s)" % (k, uri))
+
+ if uri not in outstanding_requesters['uriList'].keys():
+ outstanding_requesters['uriList'][uri] = {}
+
+ outstanding_requesters['uriList'][uri][k] = v
+
+ else:
+ force_update = True
+ self.log.info(
+ "[lambda_log_parser: iterate_urilist_for_new_uri] \
+ %s expired in uriList (%s)" % (k, uri))
+
+ return outstanding_requesters, force_update
+
+
+ def iterate_urilist(self, uri, threshold, remote_outstanding_requesters, outstanding_requesters,
+ utc_now_timestamp_str, utc_now_timestamp, force_update):
+ for k, v in remote_outstanding_requesters['uriList'][uri].items():
+ try:
+ if uri in outstanding_requesters['uriList'].keys() and k in \
+ outstanding_requesters['uriList'][uri].keys():
+
+ outstanding_requesters = self.iterate_urilist_for_existing_uri(
+ uri, k, v, outstanding_requesters, utc_now_timestamp_str)
+
+ else:
+ outstanding_requesters, force_update = self.iterate_urilist_for_new_uri(
+ uri, k, v, threshold, utc_now_timestamp,
+ outstanding_requesters, force_update)
+
+ except Exception:
+ self.log.error(
+ "[lambda_log_parser: iterate_urilist] Error merging uriList (%s) %s rule" % (uri, k))
+
+ return outstanding_requesters, force_update
+
+
+ def merge_urilist_outstanding_requesters(self, threshold, remote_outstanding_requesters, outstanding_requesters,
+ utc_now_timestamp_str, utc_now_timestamp, force_update):
+ try:
+ if 'uriList' not in self.config or len(self.config['uriList']) == 0:
+ force_update = True
+ self.log.info(
+ "[lambda_log_parser: merge_urilist_outstanding_requesters] Current config file does not contain uriList anymore")
+ else:
+ for uri in remote_outstanding_requesters['uriList'].keys():
+ if 'ignoredSufixes' in self.config['general'] and uri.endswith(
+ tuple(self.config['general']['ignoredSufixes'])):
+ force_update = True
+ self.log.info(
+ "[lambda_log_parser: merge_urilist_outstanding_requesters] %s is in current ignored suffixes list." % uri)
+ continue
+
+ outstanding_requesters, force_update = self.iterate_urilist(
+ uri, threshold, remote_outstanding_requesters, outstanding_requesters,
+ utc_now_timestamp_str, utc_now_timestamp, force_update)
+ except Exception:
+ self.log.error("[lambda_log_parser: merge_outstanding_requesters] Failed to process uriList group.")
+
+ return outstanding_requesters, force_update
+
+
+ def merge_outstanding_requesters(self, bucket_name, key_name, log_type, output_key_name, outstanding_requesters):
+ self.log.debug("[lambda_log_parser: merge_outstanding_requesters] Start")
+
+ force_update = False
+ need_update = False
+
+ # Get metadata of object key_name
+ response = self.s3_util.get_head_object(bucket_name, output_key_name)
+ if response is None:
+ self.log.info("[lambda_log_parser: merge_outstanding_requesters] No file to be merged.")
+ need_update = True
+ return outstanding_requesters, need_update
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[lambda_log_parser: merge_outstanding_requesters] Calculate Last Update Age")
+ # --------------------------------------------------------------------------------------------------------------
+ utc_now_timestamp, utc_now_timestamp_str, last_update_age = self.calculate_last_update_age(response)
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[lambda_log_parser: merge_outstanding_requesters] Download current blocked IPs")
+ # --------------------------------------------------------------------------------------------------------------
+ remote_outstanding_requesters = self.get_current_blocked_ips(bucket_name, key_name, output_key_name)
+
+ # ----------------------------------------------------------------------------------------------------------
+ self.log.info("[lambda_log_parser: merge_outstanding_requesters] Process outstanding requesters files")
+ # ----------------------------------------------------------------------------------------------------------
+ threshold = 'requestThreshold' if log_type == 'waf' else "errorThreshold"
+ if 'general' in remote_outstanding_requesters:
+ remote_outstanding_requesters, force_update = self.merge_general_outstanding_requesters(
+ threshold, remote_outstanding_requesters, outstanding_requesters,
+ utc_now_timestamp_str, utc_now_timestamp, force_update)
+ if 'uriList' in remote_outstanding_requesters:
+ outstanding_requesters, force_update = self.merge_urilist_outstanding_requesters(
+ threshold, remote_outstanding_requesters, outstanding_requesters,
+ utc_now_timestamp_str, utc_now_timestamp, force_update)
+
+ need_update = (force_update or
+ last_update_age > int(os.getenv('MAX_AGE_TO_UPDATE')) or
+ len(outstanding_requesters['general']) > 0 or
+ len(outstanding_requesters['uriList']) > 0)
+
+ self.log.debug("[lambda_log_parser: merge_outstanding_requesters] End")
+ return outstanding_requesters, need_update
+
+
+ def write_output(self, bucket_name, key_name, output_key_name, outstanding_requesters):
+ self.log.debug("[lambda_log_parser: write_output] Start")
+
+ try:
+ current_data = TMP_DIR + key_name.split('/')[-1] + '_LOCAL.json'
+ with open(current_data, 'w') as outfile:
+ json.dump(outstanding_requesters, outfile)
+
+ self.s3_util.upload_file_to_s3(current_data, bucket_name, output_key_name)
+ remove(current_data)
+
+ except Exception as e:
+ self.log.error("[lambda_log_parser: write_output] Error to write output file")
+ self.log.error(e)
+
+ self.log.debug("[lambda_log_parser: write_output] End")
+
+
+ def merge_lists(self, outstanding_requesters):
+ self.log.debug("[lambda_log_parser: merge_lists] Start to merge general and uriList into a single list")
+
+ unified_outstanding_requesters = outstanding_requesters['general']
+ for uri in outstanding_requesters['uriList'].keys():
+ for k in outstanding_requesters['uriList'][uri].keys():
+ if (k not in unified_outstanding_requesters.keys() or
+ outstanding_requesters['uriList'][uri][k]['max_counter_per_min'] >
+ unified_outstanding_requesters[k]['max_counter_per_min']):
+ unified_outstanding_requesters[k] = outstanding_requesters['uriList'][uri][k]
+
+ self.log.debug("[lambda_log_parser: merge_lists] End")
+ return unified_outstanding_requesters
+
+
+ def truncate_list(self, unified_outstanding_requesters):
+ self.log.debug("[lambda_log_parser: truncate_list] " +
+ "Start to truncate [if necessary] list to respect WAF ip range limit")
+
+ ip_range_limit = int(os.getenv('LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION'))
+ if len(unified_outstanding_requesters) > ip_range_limit:
+ ordered_unified_outstanding_requesters = sorted(
+ unified_outstanding_requesters.items(),
+ key=lambda kv: kv[1]['max_counter_per_min'], reverse=True)
+            unified_outstanding_requesters = {}
+            counter = 0  # number of requesters kept so far
+            for key, value in ordered_unified_outstanding_requesters:
+ if counter < ip_range_limit:
+ unified_outstanding_requesters[key] = value
+ counter += 1
+ else:
+ break
+
+ self.log.debug("[lambda_log_parser: truncate_list] End")
+ return unified_outstanding_requesters
+
+
+ def build_ip_list_to_block(self, unified_outstanding_requesters):
+ self.log.debug("[lambda_log_parser: truncate_list] Start to build list of ips to be blocked")
+
+ addresses_v4 = []
+ addresses_v6 = []
+
+ for k in unified_outstanding_requesters.keys():
+ ip_type = self.waflib.which_ip_version(self.log, k)
+ source_ip = self.waflib.set_ip_cidr(self.log, k)
+
+ if ip_type == "IPV4":
+ addresses_v4.append(source_ip)
+ elif ip_type == "IPV6":
+ addresses_v6.append(source_ip)
+
+ self.log.debug("[lambda_log_parser: truncate_list] End")
+ return addresses_v4, addresses_v6
+
+
+ def update_ip_set(self, ip_set_type, outstanding_requesters):
+ self.log.info("[update_ip_set] Start")
+
+ # With wafv2 api we need to pass the scope, name and arn of an IPSet to manipulate the Address list
+ # We also can only put source_ips in the appropriate IPSets based on IP version
+ # Depending on the ip_set_type, we choose the appropriate set of IPSets and Names
+
+ # initialize as SCANNER_PROBES IPSets
+ ipset_name_v4 = None
+ ipset_name_v6 = None
+ ipset_arn_v4 = None
+ ipset_arn_v6 = None
+
+ # switch if type of IPSets are HTTP_FLOOD
+ if ip_set_type == self.flood:
+ ipset_name_v4 = os.getenv('IP_SET_NAME_HTTP_FLOODV4')
+ ipset_name_v6 = os.getenv('IP_SET_NAME_HTTP_FLOODV6')
+ ipset_arn_v4 = os.getenv('IP_SET_ID_HTTP_FLOODV4')
+ ipset_arn_v6 = os.getenv('IP_SET_ID_HTTP_FLOODV6')
+ elif ip_set_type == self.scanners:
+ ipset_name_v4 = os.getenv('IP_SET_NAME_SCANNERS_PROBESV4')
+ ipset_name_v6 = os.getenv('IP_SET_NAME_SCANNERS_PROBESV6')
+ ipset_arn_v4 = os.getenv('IP_SET_ID_SCANNERS_PROBESV4')
+ ipset_arn_v6 = os.getenv('IP_SET_ID_SCANNERS_PROBESV6')
+
+ counter = 0
+ try:
+            if ipset_arn_v4 is None or ipset_arn_v6 is None:
+ self.log.info("[update_ip_set] Ignore process when ip_set_id is None")
+ return
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[update_ip_set] Merge general and uriList into a single list")
+ # --------------------------------------------------------------------------------------------------------------
+ unified_outstanding_requesters = self.merge_lists(outstanding_requesters)
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[update_ip_set] Truncate [if necessary] list to respect WAF limit")
+ # --------------------------------------------------------------------------------------------------------------
+ unified_outstanding_requesters = self.truncate_list(unified_outstanding_requesters)
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[update_ip_set] Block remaining outstanding requesters")
+ # --------------------------------------------------------------------------------------------------------------
+ addresses_v4, addresses_v6 = self.build_ip_list_to_block(unified_outstanding_requesters)
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[ update_ip_set] Commit changes in WAF IP set")
+ # --------------------------------------------------------------------------------------------------------------
+ response = self.waflib.update_ip_set(self.log, self.scope, ipset_name_v4, ipset_arn_v4, addresses_v4)
+ self.log.debug("[update_ip_set] update ipsetv4 response: \n%s" % response)
+
+ # Sleep for a few seconds to mitigate AWS WAF Update API call throttling issue
+ sleep(self.delay_between_updates)
+
+ response = self.waflib.update_ip_set(self.log, self.scope, ipset_name_v6, ipset_arn_v6, addresses_v6)
+ self.log.debug("[update_ip_set] update ipsetv6 response: \n%s" % response)
+
+ except Exception as error:
+ self.log.error(str(error))
+ self.log.error("[update_ip_set] Error to update waf ip set")
+
+ self.log.info("[update_ip_set] End")
+ return counter
+
+
+ def process_log_file(self, bucket_name, key_name, conf_filename, output_filename, log_type, ip_set_type):
+ self.log.debug("[lambda_log_parser: process_log_file] Start")
+
+ # --------------------------------------------------------------------------------------------------------------
+ self.log.info("[lambda_log_parser: process_log_file] Reading input data and get outstanding requesters")
+ # --------------------------------------------------------------------------------------------------------------
+ self.config = self.s3_util.read_json_config_file_from_s3(bucket_name, conf_filename)
+ counter, outstanding_requesters = self.parse_log_file(bucket_name, key_name, log_type)
+ outstanding_requesters = self.get_outstanding_requesters(log_type, counter, outstanding_requesters)
+ outstanding_requesters, need_update = self.merge_outstanding_requesters(
+ bucket_name, key_name, log_type, output_filename, outstanding_requesters)
+
+ if need_update:
+ # ----------------------------------------------------------------------------------------------------------
+ self.log.info("[process_log_file] Update new blocked requesters list to S3")
+ # ----------------------------------------------------------------------------------------------------------
+ self.write_output(bucket_name, key_name, output_filename, outstanding_requesters)
+
+ # ----------------------------------------------------------------------------------------------------------
+ self.log.info("[process_log_file] Update WAF IP Set")
+ # ----------------------------------------------------------------------------------------------------------
+ self.update_ip_set(ip_set_type, outstanding_requesters)
+
+ else:
+ # ----------------------------------------------------------------------------------------------------------
+ self.log.info("[process_log_file] No changes identified")
+ # ----------------------------------------------------------------------------------------------------------
+
+ self.log.debug('[process_log_file] End')
\ No newline at end of file
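End to end, the S3-triggered Lambda parser path looks roughly like the sketch below. Bucket, key, and file names are hypothetical, and the class additionally expects SCOPE, the IP set name/ARN variables, MAX_AGE_TO_UPDATE, and LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION in the environment:

```python
import logging

log = logging.getLogger()
parser = LambdaLogParser(log)
parser.process_log_file(
    bucket_name='my-waf-log-bucket',              # hypothetical
    key_name='AWSLogs/2023/05/11/sample.log.gz',  # hypothetical gzipped log
    conf_filename='parser-config.json',           # hypothetical JSON config object
    output_filename='blocked-requesters.json',    # hypothetical merge/output object
    log_type='waf',
    ip_set_type=parser.flood)                     # 2 == HTTP flood IP sets
```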
diff --git a/source/log_parser/log-parser.py b/source/log_parser/log-parser.py
deleted file mode 100644
index 3da7f14e..00000000
--- a/source/log_parser/log-parser.py
+++ /dev/null
@@ -1,979 +0,0 @@
-######################################################################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
-# with the License. A copy of the License is located at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
-# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
-# and limitations under the License. #
-######################################################################################################################
-
-import boto3
-import csv
-import gzip
-import json
-import logging
-import datetime
-import os
-from os import environ, remove
-from botocore.config import Config
-from time import sleep
-from urllib.parse import unquote_plus
-from urllib.parse import urlparse
-import requests
-
-from lib.waflibv2 import WAFLIBv2
-from lib.solution_metrics import send_metrics
-from build_athena_queries import build_athena_query_for_app_access_logs, \
- build_athena_query_for_waf_logs
-from lib.boto3_util import create_client, create_resource
-
-logging.getLogger().debug('Loading function')
-
-api_call_num_retries = 5
-max_descriptors_per_ip_set_update = 500
-delay_between_updates = 5
-scope = os.getenv('SCOPE')
-scanners = 1
-flood = 2
-
-# CloudFront Access Logs
-# http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#BasicDistributionFileFormat
-LINE_FORMAT_CLOUD_FRONT = {
- 'delimiter': '\t',
- 'date': 0,
- 'time': 1,
- 'source_ip': 4,
- 'uri': 7,
- 'code': 8
-}
-# ALB Access Logs
-# http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
-LINE_FORMAT_ALB = {
- 'delimiter': ' ',
- 'timestamp': 1,
- 'source_ip': 3,
- 'code': 9, # GitHub issue #44. Changed from elb_status_code to target_status_code.
- 'uri': 13
-}
-
-waflib = WAFLIBv2()
-config = {}
-
-
-# ======================================================================================================================
-# Auxiliary Functions
-# ======================================================================================================================
-def update_ip_set(log, ip_set_type, outstanding_requesters):
- log.info('[update_ip_set] Start')
-
- # With wafv2 api we need to pass the scope, name and arn of an IPSet to manipulate the Address list
- # We also can only put source_ips in the appropriate IPSets based on IP version
- # Depending on the ip_set_type, we choose the appropriate set of IPSets and Names
-
- # initialize as SCANNER_PROBES IPSets
- ipset_name_v4 = None
- ipset_name_v6 = None
- ipset_arn_v4 = None
- ipset_arn_v6 = None
-
- # switch if type of IPSets are HTTP_FLOOD
- if ip_set_type == flood:
- ipset_name_v4 = os.getenv('IP_SET_NAME_HTTP_FLOODV4')
- ipset_name_v6 = os.getenv('IP_SET_NAME_HTTP_FLOODV6')
- ipset_arn_v4 = os.getenv('IP_SET_ID_HTTP_FLOODV4')
- ipset_arn_v6 = os.getenv('IP_SET_ID_HTTP_FLOODV6')
-
- if ip_set_type == scanners:
- ipset_name_v4 = os.getenv('IP_SET_NAME_SCANNERS_PROBESV4')
- ipset_name_v6 = os.getenv('IP_SET_NAME_SCANNERS_PROBESV6')
- ipset_arn_v4 = os.getenv('IP_SET_ID_SCANNERS_PROBESV4')
- ipset_arn_v6 = os.getenv('IP_SET_ID_SCANNERS_PROBESV6')
-
- counter = 0
- try:
- if ipset_arn_v4 == None or ipset_arn_v6 == None:
- log.info("[update_ip_set] Ignore process when ip_set_id is None")
- return
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[update_ip_set] \tMerge general and uriList into a single list")
- # --------------------------------------------------------------------------------------------------------------
- unified_outstanding_requesters = outstanding_requesters['general']
- for uri in outstanding_requesters['uriList'].keys():
- for k in outstanding_requesters['uriList'][uri].keys():
- if (k not in unified_outstanding_requesters.keys() or
- outstanding_requesters['uriList'][uri][k]['max_counter_per_min'] >
- unified_outstanding_requesters[k]['max_counter_per_min']):
- unified_outstanding_requesters[k] = outstanding_requesters['uriList'][uri][k]
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[update_ip_set] \tTruncate [if necessary] list to respect WAF limit")
- # --------------------------------------------------------------------------------------------------------------
- if len(unified_outstanding_requesters) > int(os.getenv('LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION')):
- ordered_unified_outstanding_requesters = sorted(unified_outstanding_requesters.items(),
- key=lambda kv: kv[1]['max_counter_per_min'], reverse=True)
- unified_outstanding_requesters = {}
- for key, value in ordered_unified_outstanding_requesters:
- if counter < int(os.getenv('LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION')):
- unified_outstanding_requesters[key] = value
- counter += 1
- else:
- break
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[update_ip_set] \tBlock remaining outstanding requesters")
- # --------------------------------------------------------------------------------------------------------------
- addresses_v4 = []
- addresses_v6 = []
-
- for k in unified_outstanding_requesters.keys():
- ip_type = waflib.which_ip_version(log, k)
- source_ip = waflib.set_ip_cidr(log, k)
-
- if ip_type == "IPV4":
- addresses_v4.append(source_ip)
- elif ip_type == "IPV6":
- addresses_v6.append(source_ip)
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[update_ip_set] \tCommit changes in WAF IP set")
- # --------------------------------------------------------------------------------------------------------------
- response = waflib.update_ip_set(log, scope, ipset_name_v4, ipset_arn_v4, addresses_v4)
-
- # Sleep for a few seconds to mitigate AWS WAF Update API call throttling issue
- sleep(delay_between_updates)
-
- response = waflib.update_ip_set(log, scope, ipset_name_v6, ipset_arn_v6, addresses_v6)
-
- except Exception as error:
- log.error(str(error))
- log.error("[update_ip_set] Error to update waf ip set")
-
- log.info('[update_ip_set] End')
- return counter
-
-
-def send_anonymous_usage_data(log):
- try:
- if 'SEND_ANONYMOUS_USAGE_DATA' not in environ or os.getenv('SEND_ANONYMOUS_USAGE_DATA').lower() != 'yes':
- return
-
- log.info("[send_anonymous_usage_data] Start")
-
- cw = create_client('cloudwatch')
- usage_data = {
- "data_type": "log_parser",
- "scanners_probes_set_size": 0,
- "http_flood_set_size": 0,
- "allowed_requests": 0,
- "blocked_requests_all": 0,
- "blocked_requests_scanners_probes": 0,
- "blocked_requests_http_flood": 0,
- "allowed_requests_WAFWebACL": 0,
- "blocked_requests_WAFWebACL": 0,
- "waf_type": os.getenv('LOG_TYPE')
- }
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get num allowed requests")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='AllowedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=300,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=300),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": "ALL"
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
- if len(response['Datapoints']):
- usage_data['allowed_requests'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to get Num Allowed Requests")
- log.debug(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get num blocked requests - all rules")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=300,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=300),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": "ALL"
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['blocked_requests_all'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to get num blocked requests - all rules")
- log.error(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get scanners probes data")
- # --------------------------------------------------------------------------------------------------------------
- if 'IP_SET_ID_SCANNERS_PROBESV4' in environ or 'IP_SET_ID_SCANNERS_PROBESV6' in environ:
- try:
- countv4 = 0
- response = waflib.get_ip_set(log, scope,
- os.getenv('IP_SET_NAME_SCANNERS_PROBESV4'),
- os.getenv('IP_SET_ID_SCANNERS_PROBESV4')
- )
- log.info(response)
- if response is not None:
- countv4 = len(response['IPSet']['Addresses'])
- log.info("Scanner Probes IPV4 address Count: %s", countv4)
-
- countv6 = 0
- response = waflib.get_ip_set(log, scope,
- os.getenv('IP_SET_NAME_SCANNERS_PROBESV6'),
- os.getenv('IP_SET_ID_SCANNERS_PROBESV6')
- )
- log.info(response)
- if response is not None:
- countv6 = len(response['IPSet']['Addresses'])
- log.info("Scanner Probes IPV6 address Count: %s", countv6)
-
- usage_data['scanners_probes_set_size'] = str(countv4 + countv6)
-
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=300,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=300),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": os.getenv('METRIC_NAME_PREFIX') + 'ScannersProbesRule'
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['blocked_requests_scanners_probes'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to get scanners probes data")
- log.debug(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get HTTP flood data")
- # --------------------------------------------------------------------------------------------------------------
- if 'IP_SET_ID_HTTP_FLOODV4' in environ or 'IP_SET_ID_HTTP_FLOODV6' in environ:
- try:
- countv4 = 0
- response = waflib.get_ip_set(log, scope,
- os.getenv('IP_SET_NAME_HTTP_FLOODV4'),
- os.getenv('IP_SET_ID_HTTP_FLOODV4')
- )
- log.info(response)
- if response is not None:
- countv4 = len(response['IPSet']['Addresses'])
- log.info("HTTP Flood IPV4 address Count: %s", countv4)
-
- countv6 = 0
- response = waflib.get_ip_set(log, scope,
- os.getenv('IP_SET_NAME_HTTP_FLOODV6'),
- os.getenv('IP_SET_ID_HTTP_FLOODV6')
- )
- log.info(response)
- if response is not None:
- countv6 = len(response['IPSet']['Addresses'])
- log.info("HTTP Flood IPV6 address Count: %s", countv6)
-
- usage_data['http_flood_set_size'] = str(countv4 + countv6)
-
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=300,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=300),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": os.getenv('METRIC_NAME_PREFIX') + 'HttpFloodRegularRule'
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['blocked_requests_http_flood'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to get HTTP flood data")
- log.error(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get num allowed requests - WAF Web ACL")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='AllowedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=300,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=300),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": os.getenv('METRIC_NAME_PREFIX') + 'WAFWebACL'
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['allowed_requests_WAFWebACL'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to get num blocked requests - all rules")
- log.error(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Get num blocked requests - WAF Web ACL")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=300,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=300),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": os.getenv('METRIC_NAME_PREFIX') + 'WAFWebACL'
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['blocked_requests_WAFWebACL'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to get num blocked requests - all rules")
- log.error(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Send Data")
- # --------------------------------------------------------------------------------------------------------------
- response = send_metrics(data=usage_data)
- response_code = response.status_code
- log.info('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
- log.info("[send_anonymous_usage_data] End")
-
- except Exception as error:
- log.info("[send_anonymous_usage_data] Failed to send data")
- log.error(str(error))
-
-
-# ======================================================================================================================
-# Athena Log Parser
-# ======================================================================================================================
-def process_athena_scheduler_event(log, event):
- log.debug('[process_athena_scheduler_event] Start')
-
- log_type = str(environ['LOG_TYPE'].upper())
-
- # Execute athena query for CloudFront or ALB logs
- if event['resourceType'] == 'LambdaAthenaAppLogParser' \
- and (log_type == 'CLOUDFRONT' or log_type == 'ALB'):
- execute_athena_query(log, log_type, event)
-
- # Execute athena query for WAF logs
- if event['resourceType'] == 'LambdaAthenaWAFLogParser':
- execute_athena_query(log, 'WAF', event)
-
- log.debug('[process_athena_scheduler_event] End')
-
-
-def execute_athena_query(log, log_type, event):
- log.debug('[execute_athena_query] Start')
-
- athena_client = create_client('athena')
- s3_output = "s3://%s/athena_results/" % event['accessLogBucket']
- database_name = event['glueAccessLogsDatabase']
-
- # Dynamically build query string using partition
- # for CloudFront or ALB logs
- if log_type == 'CLOUDFRONT' or log_type == 'ALB':
- query_string = build_athena_query_for_app_access_logs(
- log,
- log_type,
- event['glueAccessLogsDatabase'],
- event['glueAppAccessLogsTable'],
- datetime.datetime.utcnow(),
- int(environ['WAF_BLOCK_PERIOD']),
- int(environ['ERROR_THRESHOLD'])
- )
- else: # Dynamically build query string using partition for WAF logs
- query_string = build_athena_query_for_waf_logs(
- log,
- event['glueAccessLogsDatabase'],
- event['glueWafAccessLogsTable'],
- datetime.datetime.utcnow(),
- int(environ['WAF_BLOCK_PERIOD']),
- int(environ['REQUEST_THRESHOLD'])
- )
-
- response = athena_client.start_query_execution(
- QueryString=query_string,
- QueryExecutionContext={'Database': database_name},
- ResultConfiguration={
- 'OutputLocation': s3_output,
- 'EncryptionConfiguration': {
- 'EncryptionOption': 'SSE_S3'
- }
- },
- WorkGroup=event['athenaWorkGroup']
- )
-
- log.info("[execute_athena_query] Query Execution Response: {}".format(response))
- log.info('[execute_athena_query] End')
-
-
-def process_athena_result(log, bucket_name, key_name, ip_set_type):
- log.debug('[process_athena_result] Start')
-
- try:
- # --------------------------------------------------------------------------------------------------------------
- log.info("[process_athena_result] \tDownload file from S3")
- # --------------------------------------------------------------------------------------------------------------
- local_file_path = '/tmp/' + key_name.split('/')[-1]
- s3 = create_client('s3')
- s3.download_file(bucket_name, key_name, local_file_path)
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[process_athena_result] \tRead file content")
- # --------------------------------------------------------------------------------------------------------------
- outstanding_requesters = {
- 'general': {},
- 'uriList': {}
- }
- utc_now_timestamp_str = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S %Z%z")
- with open(local_file_path, 'r') as csvfile:
- reader = csv.DictReader(csvfile)
- for row in reader:
- # max_counter_per_min is set as 1 just to reuse lambda log parser data structure
- # and reuse update_ip_set.
- outstanding_requesters['general'][row['client_ip']] = {
- "max_counter_per_min": row['max_counter_per_min'],
- "updated_at": utc_now_timestamp_str
- }
- remove(local_file_path)
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[process_athena_result] \tUpdate WAF IP Sets")
- # --------------------------------------------------------------------------------------------------------------
- update_ip_set(log,ip_set_type, outstanding_requesters)
-
- except Exception:
- log.error("[process_athena_result] \tError to read input file")
-
- log.debug('[process_athena_result] End')
-
-
-# ======================================================================================================================
-# Lambda Log Parser
-# ======================================================================================================================
-def load_configurations(log, bucket_name, key_name):
- log.debug('[load_configurations] Start')
-
- try:
- s3_resource = create_resource('s3')
- file_obj = s3_resource.Object(bucket_name, key_name)
- file_content = file_obj.get()['Body'].read()
-
- global config
- config = json.loads(file_content)
-
- except Exception as e:
- log.error("[load_configurations] \tError to read config file")
- raise e
-
- log.debug('[load_configurations] End')
-
-
-def get_outstanding_requesters(log, bucket_name, key_name, log_type):
- log.debug('[get_outstanding_requesters] Start')
-
- counter = {
- 'general': {},
- 'uriList': {}
- }
- outstanding_requesters = {
- 'general': {},
- 'uriList': {}
- }
-
- try:
- # --------------------------------------------------------------------------------------------------------------
- log.info("[get_outstanding_requesters] \tDownload file from S3")
- # --------------------------------------------------------------------------------------------------------------
- local_file_path = '/tmp/' + key_name.split('/')[-1]
- s3 = create_client('s3')
- s3.download_file(bucket_name, key_name, local_file_path)
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[get_outstanding_requesters] \tRead file content")
- # --------------------------------------------------------------------------------------------------------------
- error_count = 0
- with gzip.open(local_file_path, 'r') as content:
- for line in content:
- try:
- request_key = ""
- uri = ""
- return_code_index = None
-
- if log_type == 'waf':
- line = line.decode() # Remove the b in front of each field
- line_data = json.loads(str(line))
-
- request_key = datetime.datetime.fromtimestamp(int(line_data['timestamp']) / 1000.0).isoformat(
- sep='T', timespec='minutes')
- request_key += ' ' + line_data['httpRequest']['clientIp']
- uri = urlparse(line_data['httpRequest']['uri']).path
-
- elif log_type == 'alb':
- line = line.decode('utf8')
- if line.startswith('#'):
- continue
-
- line_data = line.split(LINE_FORMAT_ALB['delimiter'])
- request_key = line_data[LINE_FORMAT_ALB['timestamp']].rsplit(':', 1)[0]
- request_key += ' ' + line_data[LINE_FORMAT_ALB['source_ip']].rsplit(':', 1)[0]
- return_code_index = LINE_FORMAT_ALB['code']
- uri = urlparse(line_data[LINE_FORMAT_ALB['uri']]).path
-
- elif log_type == 'cloudfront':
- line = line.decode('utf8')
- if line.startswith('#'):
- continue
-
- line_data = line.split(LINE_FORMAT_CLOUD_FRONT['delimiter'])
- request_key = line_data[LINE_FORMAT_CLOUD_FRONT['date']]
- request_key += ' ' + line_data[LINE_FORMAT_CLOUD_FRONT['time']][:-3]
- request_key += ' ' + line_data[LINE_FORMAT_CLOUD_FRONT['source_ip']]
- return_code_index = LINE_FORMAT_CLOUD_FRONT['code']
- uri = urlparse(line_data[LINE_FORMAT_CLOUD_FRONT['uri']]).path
-
- else:
- return outstanding_requesters
-
- if 'ignoredSufixes' in config['general'] and uri.endswith(
- tuple(config['general']['ignoredSufixes'])):
- log.debug(
- "[get_outstanding_requesters] \t\tSkipping line %s. Included in ignoredSufixes." % line)
- continue
-
- if return_code_index == None or line_data[return_code_index] in config['general']['errorCodes']:
- if request_key in counter['general'].keys():
- counter['general'][request_key] += 1
- else:
- counter['general'][request_key] = 1
-
- if 'uriList' in config and uri in config['uriList'].keys():
- if uri not in counter['uriList'].keys():
- counter['uriList'][uri] = {}
-
- if request_key in counter['uriList'][uri].keys():
- counter['uriList'][uri][request_key] += 1
- else:
- counter['uriList'][uri][request_key] = 1
-
- except Exception as e:
- error_count += 1
- log.error("[get_outstanding_requesters] \t\tError to process line: %s" % line)
- log.error(str(e))
- if error_count == 5: #Allow 5 errors before stopping the function execution
- raise
- remove(local_file_path)
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[get_outstanding_requesters] \tKeep only outstanding requesters")
- # --------------------------------------------------------------------------------------------------------------
- threshold = 'requestThreshold' if log_type == 'waf' else "errorThreshold"
- utc_now_timestamp_str = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S %Z%z")
- for k, num_reqs in counter['general'].items():
- try:
- k = k.split(' ')[-1]
- if num_reqs >= config['general'][threshold]:
- if k not in outstanding_requesters['general'].keys() or num_reqs > \
- outstanding_requesters['general'][k]['max_counter_per_min']:
- outstanding_requesters['general'][k] = {
- 'max_counter_per_min': num_reqs,
- 'updated_at': utc_now_timestamp_str
- }
- except Exception as e:
- log.error(
- "[get_outstanding_requesters] \t\tError to process outstanding requester: %s" % k)
-
- for uri in counter['uriList'].keys():
- for k, num_reqs in counter['uriList'][uri].items():
- try:
- k = k.split(' ')[-1]
- if num_reqs >= config['uriList'][uri][threshold]:
- if uri not in outstanding_requesters['uriList'].keys():
- outstanding_requesters['uriList'][uri] = {}
-
- if k not in outstanding_requesters['uriList'][uri].keys() or num_reqs > \
- outstanding_requesters['uriList'][uri][k]['max_counter_per_min']:
- outstanding_requesters['uriList'][uri][k] = {
- 'max_counter_per_min': num_reqs,
- 'updated_at': utc_now_timestamp_str
- }
- except Exception as e:
- log.error(
- "[get_outstanding_requesters] \t\tError to process outstanding requester: (%s) %s" % (uri, k))
-
- except Exception as e:
- log.error("[get_outstanding_requesters] \tError to read input file")
- log.error(e)
-
- log.debug('[get_outstanding_requesters] End')
- return outstanding_requesters
-
-
-def merge_outstanding_requesters(log, bucket_name, key_name, log_type, output_key_name, outstanding_requesters):
- log.debug('[merge_outstanding_requesters] Start')
-
- force_update = False
- need_update = False
- s3 = create_client('s3')
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[merge_outstanding_requesters] \tCalculate Last Update Age")
- # --------------------------------------------------------------------------------------------------------------
- response = None
- try:
- response = s3.head_object(Bucket=bucket_name, Key=output_key_name)
- except Exception:
- log.info('[merge_outstanding_requesters] No file to be merged.')
- need_update = True
- return outstanding_requesters, need_update
-
- utc_last_modified = response['LastModified'].astimezone(datetime.timezone.utc)
- utc_now_timestamp = datetime.datetime.now(datetime.timezone.utc)
-
- utc_now_timestamp_str = utc_now_timestamp.strftime("%Y-%m-%d %H:%M:%S %Z%z")
- last_update_age = int(((utc_now_timestamp - utc_last_modified).total_seconds()) / 60)
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[merge_outstanding_requesters] \tDownload current blocked IPs")
- # --------------------------------------------------------------------------------------------------------------
- local_file_path = '/tmp/' + key_name.split('/')[-1] + '_REMOTE.json'
- s3.download_file(bucket_name, output_key_name, local_file_path)
-
- # ----------------------------------------------------------------------------------------------------------
- log.info("[merge_outstanding_requesters] \tProcess outstanding requesters files")
- # ----------------------------------------------------------------------------------------------------------
- remote_outstanding_requesters = {
- 'general': {},
- 'uriList': {}
- }
- with open(local_file_path, 'r') as file_content:
- remote_outstanding_requesters = json.loads(file_content.read())
- remove(local_file_path)
-
- threshold = 'requestThreshold' if log_type == 'waf' else "errorThreshold"
- try:
- if 'general' in remote_outstanding_requesters:
- for k, v in remote_outstanding_requesters['general'].items():
- try:
- if k in outstanding_requesters['general'].keys():
- log.info(
- "[merge_outstanding_requesters] \t\tUpdating general data of BLOCK %s rule" % k)
- outstanding_requesters['general'][k]['updated_at'] = utc_now_timestamp_str
- if v['max_counter_per_min'] > outstanding_requesters['general'][k]['max_counter_per_min']:
- outstanding_requesters['general'][k]['max_counter_per_min'] = v['max_counter_per_min']
-
- else:
- utc_prev_updated_at = datetime.datetime.strptime(v['updated_at'],
- "%Y-%m-%d %H:%M:%S %Z%z").astimezone(
- datetime.timezone.utc)
- total_diff_min = ((utc_now_timestamp - utc_prev_updated_at).total_seconds()) / 60
-
- if v['max_counter_per_min'] < config['general'][threshold]:
- force_update = True
- log.info(
- "[merge_outstanding_requesters] \t\t%s is bellow the current general threshold" % k)
-
- elif total_diff_min < config['general']['blockPeriod']:
- log.debug("[merge_outstanding_requesters] \t\tKeeping %s in general" % k)
- outstanding_requesters['general'][k] = v
-
- else:
- force_update = True
- log.info("[merge_outstanding_requesters] \t\t%s expired in general" % k)
-
- except Exception:
- log.error("[merge_outstanding_requesters] \tError merging general %s rule" % k)
- except Exception:
- log.error('[merge_outstanding_requesters] Failed to process general group.')
-
- try:
- if 'uriList' in remote_outstanding_requesters:
- if 'uriList' not in config or len(config['uriList']) == 0:
- force_update = True
- log.info(
- "[merge_outstanding_requesters] \t\tCurrent config file does not contain uriList anymore")
- else:
- for uri in remote_outstanding_requesters['uriList'].keys():
- if 'ignoredSufixes' in config['general'] and uri.endswith(
- tuple(config['general']['ignoredSufixes'])):
- force_update = True
- log.info(
- "[merge_outstanding_requesters] \t\t%s is in current ignored sufixes list." % uri)
- continue
-
- for k, v in remote_outstanding_requesters['uriList'][uri].items():
- try:
- if uri in outstanding_requesters['uriList'].keys() and k in \
- outstanding_requesters['uriList'][uri].keys():
- log.info(
- "[merge_outstanding_requesters] \t\tUpdating uriList (%s) data of BLOCK %s rule" % (
- uri, k))
- outstanding_requesters['uriList'][uri][k]['updated_at'] = utc_now_timestamp_str
- if v['max_counter_per_min'] > outstanding_requesters['uriList'][uri][k][
- 'max_counter_per_min']:
- outstanding_requesters['uriList'][uri][k]['max_counter_per_min'] = v[
- 'max_counter_per_min']
-
- else:
- utc_prev_updated_at = datetime.datetime.strptime(v['updated_at'],
- "%Y-%m-%d %H:%M:%S %Z%z").astimezone(
- datetime.timezone.utc)
- total_diff_min = ((utc_now_timestamp - utc_prev_updated_at).total_seconds()) / 60
-
- if v['max_counter_per_min'] < config['uriList'][uri][threshold]:
- force_update = True
- log.info(
- "[merge_outstanding_requesters] \t\t%s is bellow the current uriList (%s) threshold" % (
- k, uri))
-
- elif total_diff_min < config['general']['blockPeriod']:
- log.debug(
- "[merge_outstanding_requesters] \t\tKeeping %s in uriList (%s)" % (k, uri))
-
- if uri not in outstanding_requesters['uriList'].keys():
- outstanding_requesters['uriList'][uri] = {}
-
- outstanding_requesters['uriList'][uri][k] = v
- else:
- force_update = True
- log.info(
- "[merge_outstanding_requesters] \t\t%s expired in uriList (%s)" % (k, uri))
-
- except Exception:
- log.error(
- "[merge_outstanding_requesters] \tError merging uriList (%s) %s rule" % (uri, k))
- except Exception:
- log.error('[merge_outstanding_requesters] Failed to process uriList group.')
-
- need_update = (force_update or
- last_update_age > int(os.getenv('MAX_AGE_TO_UPDATE')) or
- len(outstanding_requesters['general']) > 0 or
- len(outstanding_requesters['uriList']) > 0)
-
- log.debug('[merge_outstanding_requesters] End')
- return outstanding_requesters, need_update
-
-
-def write_output(log, bucket_name, key_name, output_key_name, outstanding_requesters):
- log.debug('[write_output] Start')
-
- try:
- current_data = '/tmp/' + key_name.split('/')[-1] + '_LOCAL.json'
- with open(current_data, 'w') as outfile:
- json.dump(outstanding_requesters, outfile)
-
- s3 = create_client('s3')
- s3.upload_file(current_data, bucket_name, output_key_name, ExtraArgs={'ContentType': "application/json"})
- remove(current_data)
-
- except Exception as e:
- log.error("[write_output] \tError to write output file")
- log.error(e)
-
- log.debug('[write_output] End')
-
-
-def process_log_file(log, bucket_name, key_name, conf_filename, output_filename, log_type, ip_set_type):
- log.debug('[process_log_file] Start')
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[process_log_file] \tReading input data and get outstanding requesters")
- # --------------------------------------------------------------------------------------------------------------
- load_configurations(log, bucket_name, conf_filename)
- outstanding_requesters = get_outstanding_requesters(log, bucket_name, key_name, log_type)
- outstanding_requesters, need_update = merge_outstanding_requesters(log, bucket_name, key_name, log_type, output_filename,
- outstanding_requesters)
-
- if need_update:
- # ----------------------------------------------------------------------------------------------------------
- log.info("[process_log_file] \tUpdate new blocked requesters list to S3")
- # ----------------------------------------------------------------------------------------------------------
- write_output(log, bucket_name, key_name, output_filename, outstanding_requesters)
-
- # ----------------------------------------------------------------------------------------------------------
- log.info("[process_log_file] \tUpdate WAF IP Set")
- # ----------------------------------------------------------------------------------------------------------
- update_ip_set(log, ip_set_type, outstanding_requesters)
-
- else:
- # ----------------------------------------------------------------------------------------------------------
- log.info("[process_log_file] \tNo changes identified")
- # ----------------------------------------------------------------------------------------------------------
-
- log.debug('[process_log_file] End')
-
-
-# ======================================================================================================================
-# Lambda Entry Point
-# ======================================================================================================================
-def lambda_handler(event, context):
- log = logging.getLogger()
- log.info('[lambda_handler] Start')
-
- result = {}
- try:
- # ------------------------------------------------------------------
- # Set Log Level
- # ------------------------------------------------------------------
- log_level = str(os.getenv('LOG_LEVEL').upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
-
- # ----------------------------------------------------------
- # Process event
- # ----------------------------------------------------------
- log.info(event)
-
- if "resourceType" in event:
- process_athena_scheduler_event(log, event)
- result['message'] = "[lambda_handler] Athena scheduler event processed."
- log.info(result['message'])
-
- elif 'Records' in event:
- for r in event['Records']:
- bucket_name = r['s3']['bucket']['name']
- key_name = unquote_plus(r['s3']['object']['key'])
-
- if 'APP_ACCESS_LOG_BUCKET' in environ and bucket_name == os.getenv('APP_ACCESS_LOG_BUCKET'):
- if key_name.startswith('athena_results/'):
- process_athena_result(log, bucket_name, key_name, scanners)
- result['message'] = "[lambda_handler] Athena app log query result processed."
- log.info(result['message'])
-
- else:
- conf_filename = os.getenv('STACK_NAME') + '-app_log_conf.json'
- output_filename = os.getenv('STACK_NAME') + '-app_log_out.json'
- log_type = os.getenv('LOG_TYPE')
- process_log_file(log, bucket_name, key_name, conf_filename, output_filename, log_type, scanners)
- result['message'] = "[lambda_handler] App access log file processed."
- log.info(result['message'])
-
- elif 'WAF_ACCESS_LOG_BUCKET' in environ and bucket_name == os.getenv('WAF_ACCESS_LOG_BUCKET'):
- if key_name.startswith('athena_results/'):
- process_athena_result(log, bucket_name, key_name, flood)
- result['message'] = "[lambda_handler] Athena AWS WAF log query result processed."
- log.info(result['message'])
-
- else:
- conf_filename = os.getenv('STACK_NAME') + '-waf_log_conf.json'
- output_filename = os.getenv('STACK_NAME') + '-waf_log_out.json'
- log_type = 'waf'
- process_log_file(log, bucket_name, key_name, conf_filename, output_filename, log_type, flood)
- result['message'] = "[lambda_handler] AWS WAF access log file processed."
- log.info(result['message'])
-
- else:
- result['message'] = "[lambda_handler] undefined handler for bucket %s" % bucket_name
- log.info(result['message'])
-
- send_anonymous_usage_data(log)
-
- else:
- result['message'] = "[lambda_handler] undefined handler for this type of event"
- log.info(result['message'])
-
- except Exception as error:
- log.error(str(error))
- raise
-
- log.info('[lambda_handler] End')
- return result
diff --git a/source/log_parser/log_parser.py b/source/log_parser/log_parser.py
new file mode 100644
index 00000000..eca388de
--- /dev/null
+++ b/source/log_parser/log_parser.py
@@ -0,0 +1,230 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import os
+from os import environ
+from urllib.parse import unquote_plus
+from lib.waflibv2 import WAFLIBv2
+from lib.solution_metrics import send_metrics
+from lib.cw_metrics_util import WAFCloudWatchMetrics
+from lib.logging_util import set_log_level
+from lambda_log_parser import LambdaLogParser
+from athena_log_parser import AthenaLogParser
+
+scope = os.getenv('SCOPE')
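+# Selectors for the target IP set type, passed through to the parsers (scanners & probes vs. HTTP flood)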
+scanners = 1
+flood = 2
+CW_METRIC_PERIOD_SECONDS = 300 # 5 minutes in seconds
+
+
+def initialize_usage_data():
+ usage_data = {
+ "data_type": "log_parser",
+ "scanners_probes_set_size": 0,
+ "http_flood_set_size": 0,
+ "allowed_requests": 0,
+ "blocked_requests_all": 0,
+ "blocked_requests_scanners_probes": 0,
+ "blocked_requests_http_flood": 0,
+ "allowed_requests_WAFWebACL": 0,
+ "blocked_requests_WAFWebACL": 0,
+ "waf_type": os.getenv('LOG_TYPE'),
+ "provisioner": os.getenv('provisioner') if "provisioner" in environ else "cfn"
+ }
+ return usage_data
+
+
+def get_log_parser_usage_data(log, waf_rule, cw, ipv4_set_id, ipv6_set_id,
+ ipset_name_v4, ipset_arn_v4, ipset_name_v6,
+ ipset_arn_v6, usage_data, usage_data_ip_set_field,
+ usage_data_blocked_request_field):
+ log.info("[get_log_parser_usage_data] Get %s data", waf_rule)
+
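+ # ipv4_set_id and ipv6_set_id are environment variable *names*; their presence indicates the rule is deployed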
+ if ipv4_set_id in environ or ipv6_set_id in environ:
+ # Get the count of ipv4 and ipv6
+ waflib = WAFLIBv2()
+ ipv4_count = waflib.get_ip_address_count(log, scope, ipset_name_v4, ipset_arn_v4)
+ ipv6_count = waflib.get_ip_address_count(log, scope, ipset_name_v6, ipset_arn_v6)
+ usage_data[usage_data_ip_set_field] = str(ipv4_count + ipv6_count)
+
+ # Get the count of blocked requests for the given waf rule from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'BlockedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ os.getenv('METRIC_NAME_PREFIX') + waf_rule,
+ usage_data,
+ usage_data_blocked_request_field,
+ 0
+ )
+ return usage_data
+
+
+def send_anonymous_usage_data(log):
+ try:
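+ # Respect the opt-out: only send metrics when the stack was deployed with anonymous usage data enabled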
+ if 'SEND_ANONYMOUS_USAGE_DATA' not in environ or os.getenv('SEND_ANONYMOUS_USAGE_DATA').lower() != 'yes':
+ return
+
+ log.info("[send_anonymous_usage_data] Start")
+
+ cw = WAFCloudWatchMetrics(log)
+ usage_data = initialize_usage_data()
+
+ # Get the count of allowed requests for all the waf rules from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'AllowedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ 'ALL',
+ usage_data,
+ 'allowed_requests',
+ 0
+ )
+
+ # Get the count of blocked requests for all the waf rules from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'BlockedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ 'ALL',
+ usage_data,
+ 'blocked_requests_all',
+ 0
+ )
+
+ # Get scanners probes rule specific usage data
+ get_log_parser_usage_data(
+ log, 'ScannersProbesRule', cw,
+ 'IP_SET_ID_SCANNERS_PROBESV4',
+ 'IP_SET_ID_SCANNERS_PROBESV6',
+ os.getenv('IP_SET_NAME_SCANNERS_PROBESV4'),
+ os.getenv('IP_SET_ID_SCANNERS_PROBESV4'),
+ os.getenv('IP_SET_NAME_SCANNERS_PROBESV6'),
+ os.getenv('IP_SET_ID_SCANNERS_PROBESV6'),
+ usage_data, 'scanners_probes_set_size',
+ 'blocked_requests_scanners_probes'
+ )
+
+ # Get HTTP flood rule specific usage data
+ get_log_parser_usage_data(
+ log, 'HttpFloodRegularRule', cw,
+ 'IP_SET_ID_HTTP_FLOODV4',
+ 'IP_SET_ID_HTTP_FLOODV6',
+ os.getenv('IP_SET_NAME_HTTP_FLOODV4'),
+ os.getenv('IP_SET_ID_HTTP_FLOODV4'),
+ os.getenv('IP_SET_NAME_HTTP_FLOODV6'),
+ os.getenv('IP_SET_ID_HTTP_FLOODV6'),
+ usage_data, 'http_flood_set_size',
+ 'blocked_requests_http_flood'
+ )
+
+ # Get the count of allowed requests for the web acl from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'AllowedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ os.getenv('METRIC_NAME_PREFIX') + 'WAFWebACL',
+ usage_data,
+ 'allowed_requests_WAFWebACL',
+ 0
+ )
+
+ # Get the count of blocked requests for the web acl from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'BlockedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ os.getenv('METRIC_NAME_PREFIX') + 'WAFWebACL',
+ usage_data,
+ 'blocked_requests_WAFWebACL',
+ 0
+ )
+
+ # Send usage data
+ log.info('[send_anonymous_usage_data] Send usage data: \n{}'.format(usage_data))
+ response = send_metrics(data=usage_data)
+ response_code = response.status_code
+ log.info('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
+ log.info("[send_anonymous_usage_data] End")
+
+ except Exception as error:
+ log.info("[send_anonymous_usage_data] Failed to send data")
+ log.error(str(error))
+
+
+# ======================================================================================================================
+# Lambda Entry Point
+# ======================================================================================================================
+def lambda_handler(event, _):
+ log = set_log_level()
+ log.info('[lambda_handler] Start')
+
+ result = {}
+ try:
+ # ----------------------------------------------------------
+ # Process event
+ # ----------------------------------------------------------
+ log.info(event)
+
+ athena_log_parser = AthenaLogParser(log)
+
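+ # Scheduled Athena events carry "resourceType"; S3 object-created notifications carry "Records"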
+ if "resourceType" in event:
+ athena_log_parser.process_athena_scheduler_event(event)
+ result['message'] = "[lambda_handler] Athena scheduler event processed."
+ log.info(result['message'])
+
+ elif 'Records' in event:
+ lambda_log_parser = LambdaLogParser(log)
+ for r in event['Records']:
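+ # S3 event notifications URL-encode object keys, so decode them before use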
+ bucket_name = r['s3']['bucket']['name']
+ key_name = unquote_plus(r['s3']['object']['key'])
+
+ if 'APP_ACCESS_LOG_BUCKET' in environ and bucket_name == os.getenv('APP_ACCESS_LOG_BUCKET'):
+ if key_name.startswith('athena_results/'):
+ athena_log_parser.process_athena_result(bucket_name, key_name, scanners)
+ result['message'] = "[lambda_handler] Athena app log query result processed."
+ log.info(result['message'])
+
+ else:
+ conf_filename = os.getenv('STACK_NAME') + '-app_log_conf.json'
+ output_filename = os.getenv('STACK_NAME') + '-app_log_out.json'
+ log_type = os.getenv('LOG_TYPE')
+ lambda_log_parser.process_log_file(bucket_name, key_name, conf_filename, output_filename, log_type, scanners)
+ result['message'] = "[lambda_handler] App access log file processed."
+ log.info(result['message'])
+
+ elif 'WAF_ACCESS_LOG_BUCKET' in environ and bucket_name == os.getenv('WAF_ACCESS_LOG_BUCKET'):
+ if key_name.startswith('athena_results/'):
+ athena_log_parser.process_athena_result(bucket_name, key_name, flood)
+ result['message'] = "[lambda_handler] Athena AWS WAF log query result processed."
+ log.info(result['message'])
+
+ else:
+ conf_filename = os.getenv('STACK_NAME') + '-waf_log_conf.json'
+ output_filename = os.getenv('STACK_NAME') + '-waf_log_out.json'
+ log_type = 'waf'
+ lambda_log_parser.process_log_file(bucket_name, key_name, conf_filename, output_filename, log_type, flood)
+ result['message'] = "[lambda_handler] AWS WAF access log file processed."
+ log.info(result['message'])
+
+ else:
+ result['message'] = "[lambda_handler] undefined handler for bucket %s" % bucket_name
+ log.info(result['message'])
+
+ send_anonymous_usage_data(log)
+
+ else:
+ result['message'] = "[lambda_handler] undefined handler for this type of event"
+ log.info(result['message'])
+
+ except Exception as error:
+ log.error(str(error))
+ raise
+
+ log.info('[lambda_handler] End')
+ return result
diff --git a/source/log_parser/partition_s3_logs.py b/source/log_parser/partition_s3_logs.py
index 1d02ac69..4faec395 100644
--- a/source/log_parser/partition_s3_logs.py
+++ b/source/log_parser/partition_s3_logs.py
@@ -1,5 +1,5 @@
##############################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance #
@@ -13,15 +13,12 @@
# governing permissions and limitations under the License. #
##############################################################################
-
-import boto3
import re
-import logging
from os import environ
-from botocore.config import Config
from lib.boto3_util import create_client
+from lib.logging_util import set_log_level
-def lambda_handler(event, context):
+def lambda_handler(event, _):
"""
This function is triggered by S3 event to move log files
(upon their arrival in s3) from their original location
@@ -33,25 +30,17 @@ def lambda_handler(event, context):
AWSLogs-Partitioned/year=2020/month=04/day=09/hour=23/
"""
- logging.getLogger().debug('[partition_s3_logs lambda_handler] Start')
+ log = set_log_level()
+ log.debug('[partition_s3_logs lambda_handler] Start')
try:
- # ---------------------------------------------------------
- # Set Log Level
- # ---------------------------------------------------------
- global log_level
- log_level = str(environ['LOG_LEVEL'].upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- logging.getLogger().setLevel(log_level)
-
# ----------------------------------------------------------
# Process event
# ----------------------------------------------------------
- logging.getLogger().info(event)
+ log.info(event)
keep_original_data = str(environ['KEEP_ORIGINAL_DATA'].upper())
endpoint = str(environ['ENDPOINT'].upper())
- logging.getLogger().info("\n[partition_s3_logs lambda_handler] KEEP ORIGINAL DATA: %s; End POINT: %s."
+ log.info("\n[partition_s3_logs lambda_handler] KEEP ORIGINAL DATA: %s; End POINT: %s."
%(keep_original_data, endpoint))
s3 = create_client('s3')
@@ -81,25 +70,25 @@ def lambda_handler(event, context):
source_path = bucket + '/' + key
dest_path = bucket + '/' + dest
- # Copy S3 object to destionation
+ # Copy S3 object to destination
s3.copy_object(Bucket=bucket, Key=dest, CopySource=source_path)
- logging.getLogger().info("\n[partition_s3_logs lambda_handler] Copied file %s to destination %s"%(source_path, dest_path))
+ log.info("\n[partition_s3_logs lambda_handler] Copied file %s to destination %s"%(source_path, dest_path))
# Only delete source S3 object from its original folder if keeping original data is no
if keep_original_data == 'NO':
s3.delete_object(Bucket=bucket, Key=key)
- logging.getLogger().info("\n[partition_s3_logs lambda_handler] Removed file %s"%source_path)
+ log.info("\n[partition_s3_logs lambda_handler] Removed file %s"%source_path)
count = count + 1
- logging.getLogger().info("\n[partition_s3_logs lambda_handler] Successfully partitioned %s file(s)."%(str(count)))
+ log.info("\n[partition_s3_logs lambda_handler] Successfully partitioned %s file(s)."%(str(count)))
except Exception as error:
- logging.getLogger().error(str(error))
+ log.error(str(error))
raise
- logging.getLogger().debug('[partition_s3_logs lambda_handler] End')
+ log.debug('[partition_s3_logs lambda_handler] End')
def parse_cloudfront_logs(key, filename):
diff --git a/source/log_parser/requirements.txt b/source/log_parser/requirements.txt
index 4fcd31fc..d1c440f5 100644
--- a/source/log_parser/requirements.txt
+++ b/source/log_parser/requirements.txt
@@ -1,2 +1,2 @@
-backoff>=2.2.1
-requests>=2.28.2
\ No newline at end of file
+backoff~=2.2.1
+requests~=2.28.2
\ No newline at end of file
diff --git a/source/log_parser/requirements_dev.txt b/source/log_parser/requirements_dev.txt
new file mode 100644
index 00000000..ab317bdd
--- /dev/null
+++ b/source/log_parser/requirements_dev.txt
@@ -0,0 +1,11 @@
+botocore~=1.29.85
+boto3~=1.26.85
+mock~=5.0.1
+moto~=4.1.4
+pytest~=7.2.2
+pytest-mock~=3.10.0
+pytest-runner~=6.0.0
+freezegun~=1.2.2
+pytest-cov~=4.0.0
+pytest-env~=0.8.1
+pyparsing~=3.0.9
\ No newline at end of file
diff --git a/source/log_parser/test/conftest.py b/source/log_parser/test/conftest.py
new file mode 100644
index 00000000..68f295c8
--- /dev/null
+++ b/source/log_parser/test/conftest.py
@@ -0,0 +1,329 @@
+###############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance with the License. #
+# A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express #
+# or implied. See the License for the specific language governing permissions#
+# and limitations under the License. #
+###############################################################################
+
+import boto3
+import pytest
+from os import environ
+from moto import mock_s3, mock_glue, mock_athena, mock_wafv2
+
+
+S3_BUCKET_NAME = "test_bucket"
+GLUE_DATABASE_NAME = "test_database"
+GLUE_TABLE_NAME = "test_table"
+ATHENA_WORK_GROUP_NAME = "test_work_group"
+ATHENA_QUERY_OUTPUT_LOCATION = "s3://%s/athena_results/" % S3_BUCKET_NAME
+REGION = "us-east-1"
+
+# local file paths
+ATHENA_QUERY_RESULT_FILE_LOCAL_PATH = "./test/test_data/test_athena_query_result.csv"
+CLOUDFRONT_LOG_FILE_LOCAL_PATH = "./test/test_data/E3HXCM7PFRG6HT.2023-04-24-21.d740d76bCloudFront.gz"
+ALB_LOG_FILE_LOCAL_PATH = "./test/test_data/XXXXXXXXXXXX_elasticloadbalancing_us-east-1_app.ApplicationLoadBalancer.fa87e1db7badc175_20230424T2110Z_X.X.X.X_4c8scnzy.log.gz"
+WAF_LOG_FILE_LOCAL_PATH = "./test/test_data/test_waf_log.gz"
+APP_LOG_CONF_FILE_LOCAL_PATH = "./test/test_data/waf_stack-app_log_conf.json"
+APP_LOG_OUTPUT_FILE_LOCAL_PATH = "./test/test_data/waf-stack-app_log_out.json"
+WAF_LOG_CONF_FILE_LOCAL_PATH = "./test/test_data/waf_stack-waf_log_conf.json"
+WAF_LOG_OUTPUT_FILE_LOCAL_PATH = "./test/test_data/waf_stack-waf_log_out.json"
+
+# remote S3 file keys
+ATHENA_QUERY_RESULT_FILE_S3_KEY = "athena_results/test_athena_query_result.csv"
+CLOUDFRONT_LOG_FILE_S3_KEY = "AWSLogs/E3HXCM7PFRG6HT.2023-04-24-21.d740d76bCloudFront.gz"
+ALB_LOG_FILE_S3_KEY = "AWSLogs/XXXXXXXXXXXX/elasticloadbalancing/us-east-1/2023/04/24/XXXXXXXXXXXX_elasticloadbalancing_us-east-1_app.ApplicationLoadBalancer.fa87e1db7badc175_20230424T2110Z_X.X.X.X_4c8scnzy.log.gz"
+WAF_LOG_FILE_S3_KEY = "AWSLogs/test_waf_log.gz"
+APP_LOG_CONF_FILE_S3_KEY = "waf_stack-app_log_conf.json"
+APP_LOG_OUTPUT_FILE_S3_KEY = "waf-stack-app_log_out.json"
+WAF_LOG_CONF_FILE_S3_KEY = "waf_stack-waf_log_conf.json"
+WAF_LOG_OUTPUT_FILE_S3_KEY = "waf_stack-waf_log_out.json"
+
+# values for triggering exception
+NON_EXISTENT_WORK_GROUP = 'non_existent_work_group'
+
+
+@pytest.fixture(scope='module', autouse=True)
+def test_aws_credentials_setup():
+ """Mocked AWS Credentials for moto"""
+ environ['AWS_ACCESS_KEY_ID'] = 'testing'
+ environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
+ environ['AWS_SECURITY_TOKEN'] = 'testing'
+ environ['AWS_SESSION_TOKEN'] = 'testing'
+ environ['AWS_DEFAULT_REGION'] = 'us-east-1'
+ environ['AWS_REGION'] = 'us-east-1'
+
+
+@pytest.fixture(scope='module', autouse=True)
+def test_environment_vars_setup():
+ """Athena Mock Client"""
+ environ['WAF_BLOCK_PERIOD'] = '240'
+ environ['ERROR_THRESHOLD'] = '100'
+ environ['REQUEST_THRESHOLD'] = '100'
+ environ['REQUEST_THRESHOLD_BY_COUNTRY'] = ''
+ environ['HTTP_FLOOD_ATHENA_GROUP_BY'] = 'None'
+ environ['ATHENA_QUERY_RUN_SCHEDULE'] = '5'
+ environ['STACK_NAME'] = 'waf_stack'
+ environ['METRIC_NAME_PREFIX'] = 'waf_stack'
+ environ['MAX_AGE_TO_UPDATE'] = '30'
+ environ['LOG_LEVEL'] = 'INFO'
+ environ['SEND_ANONYMOUS_USAGE_DATA'] = 'Yes'
+ environ['UUID'] = 'test_uuid'
+ environ['SOLUTION_ID'] = 'SO0006'
+ environ['METRICS_URL'] = 'https://testurl.com/generic'
+
+
+@pytest.fixture(scope='module')
+def s3_client():
+ with mock_s3():
+ connection = boto3.client("s3", region_name=REGION)
+ yield connection
+
+
+@pytest.fixture(scope='module')
+def s3_resource():
+ with mock_s3():
+ connection = boto3.resource("s3", region_name=REGION)
+ yield connection
+
+
+@pytest.fixture(scope='module')
+def glue_client():
+ with mock_glue():
+ connection = boto3.client("glue", region_name=REGION)
+ yield connection
+
+
+@pytest.fixture(scope='module')
+def athena_client():
+ """Athena Mock Client"""
+ with mock_athena():
+ connection = boto3.client("athena", region_name=REGION)
+ yield connection
+
+
+@pytest.fixture(scope='module', autouse=True)
+def s3_resources_setup(s3_client):
+ conn = s3_client
+ conn.create_bucket(Bucket=S3_BUCKET_NAME)
+ conn.upload_file(ATHENA_QUERY_RESULT_FILE_LOCAL_PATH, S3_BUCKET_NAME, ATHENA_QUERY_RESULT_FILE_S3_KEY)
+ conn.upload_file(CLOUDFRONT_LOG_FILE_LOCAL_PATH, S3_BUCKET_NAME, CLOUDFRONT_LOG_FILE_S3_KEY)
+ conn.upload_file(ALB_LOG_FILE_LOCAL_PATH, S3_BUCKET_NAME, ALB_LOG_FILE_S3_KEY)
+ conn.upload_file(WAF_LOG_FILE_LOCAL_PATH, S3_BUCKET_NAME, WAF_LOG_FILE_S3_KEY)
+ conn.upload_file(APP_LOG_CONF_FILE_LOCAL_PATH, S3_BUCKET_NAME, APP_LOG_CONF_FILE_S3_KEY)
+ conn.upload_file(APP_LOG_OUTPUT_FILE_LOCAL_PATH, S3_BUCKET_NAME, APP_LOG_OUTPUT_FILE_S3_KEY)
+ conn.upload_file(WAF_LOG_CONF_FILE_LOCAL_PATH, S3_BUCKET_NAME, WAF_LOG_CONF_FILE_S3_KEY)
+ conn.upload_file(WAF_LOG_OUTPUT_FILE_LOCAL_PATH, S3_BUCKET_NAME, WAF_LOG_OUTPUT_FILE_S3_KEY)
+
+
+@pytest.fixture(scope='module', autouse=True)
+def glue_resources_setup(glue_client):
+ conn = glue_client
+ conn.create_database(DatabaseInput={"Name": GLUE_DATABASE_NAME})
+ conn.create_table(DatabaseName=GLUE_DATABASE_NAME, TableInput={"Name": GLUE_TABLE_NAME})
+
+
+@pytest.fixture(scope='module', autouse=True)
+def athena_resources_setup(athena_client):
+ conn = athena_client
+ conn.create_work_group(
+ Name=ATHENA_WORK_GROUP_NAME,
+ Configuration={
+ 'ResultConfiguration': {
+ 'OutputLocation': ATHENA_QUERY_OUTPUT_LOCATION
+ }
+ }
+ )
+
+
+@pytest.fixture(scope='function')
+def app_log_athena_parser_test_event_setup():
+ event = {
+ "resourceType": "LambdaAthenaAppLogParser",
+ "glueAccessLogsDatabase": GLUE_DATABASE_NAME,
+ "accessLogBucket": S3_BUCKET_NAME,
+ "glueAppAccessLogsTable": GLUE_TABLE_NAME,
+ "athenaWorkGroup": ATHENA_WORK_GROUP_NAME
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def waf_log_athena_parser_test_event_setup():
+ event = {
+ "resourceType": "LambdaAthenaWAFLogParser",
+ "glueAccessLogsDatabase": GLUE_DATABASE_NAME,
+ "accessLogBucket": S3_BUCKET_NAME,
+ "glueWafAccessLogsTable": GLUE_TABLE_NAME,
+ "athenaWorkGroup": ATHENA_WORK_GROUP_NAME
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def app_log_athena_query_result_test_event_setup():
+ environ['APP_ACCESS_LOG_BUCKET'] = S3_BUCKET_NAME
+ event = {
+ "Records": [{
+ "s3": {
+ "bucket": {
+ "name": S3_BUCKET_NAME
+ },
+ "object": {
+ "key": ATHENA_QUERY_RESULT_FILE_S3_KEY
+ }
+ }
+ }]
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def waf_log_athena_query_result_test_event_setup():
+ environ['WAF_ACCESS_LOG_BUCKET'] = S3_BUCKET_NAME
+ event = {
+ "Records": [{
+ "s3": {
+ "bucket": {
+ "name": S3_BUCKET_NAME
+ },
+ "object": {
+ "key": ATHENA_QUERY_RESULT_FILE_S3_KEY
+ }
+ }
+ }]
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def cloudfront_log_lambda_parser_test_event_setup():
+ environ['APP_ACCESS_LOG_BUCKET'] = S3_BUCKET_NAME
+ event = {
+ "Records": [{
+ "s3": {
+ "bucket": {
+ "name": S3_BUCKET_NAME
+ },
+ "object": {
+ "key": CLOUDFRONT_LOG_FILE_S3_KEY
+ }
+ }
+ }]
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def alb_log_lambda_parser_test_event_setup():
+ environ['APP_ACCESS_LOG_BUCKET'] = S3_BUCKET_NAME
+ environ['IP_SET_NAME_SCANNERS_PROBESV4'] = 'scanner_probes_ip_set_name_v4'
+ environ['IP_SET_ID_SCANNERS_PROBESV4'] = 'scanner_probes_ip_set_id_v4'
+ environ['IP_SET_NAME_SCANNERS_PROBESV6'] = 'scanner_probes_ip_set_name_v6'
+ environ['IP_SET_ID_SCANNERS_PROBESV6'] = 'scanner_probes_ip_set_id_v6'
+ event = {
+ "Records": [{
+ "s3": {
+ "bucket": {
+ "name": S3_BUCKET_NAME
+ },
+ "object": {
+ "key": ALB_LOG_FILE_S3_KEY
+ }
+ }
+ }]
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def waf_log_lambda_parser_test_event_setup():
+ environ['WAF_ACCESS_LOG_BUCKET'] = S3_BUCKET_NAME
+ environ['IP_SET_NAME_HTTP_FLOODV4'] = 'http_flood_ip_set_name_v4'
+ environ['IP_SET_ID_HTTP_FLOODV4'] = 'http_flood_ip_set_id_v4'
+ environ['IP_SET_NAME_HTTP_FLOODV6'] = 'http_flood_ip_set_name_v6'
+ environ['IP_SET_ID_HTTP_FLOODV6'] = 'http_flood_ip_set_id_v6'
+ event = {
+ "Records": [{
+ "s3": {
+ "bucket": {
+ "name": S3_BUCKET_NAME
+ },
+ "object": {
+ "key": WAF_LOG_FILE_S3_KEY
+ }
+ }
+ }]
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def athena_partitions_test_event_setup():
+ event = {
+ "accessLogBucket": S3_BUCKET_NAME,
+ "wafLogBucket": S3_BUCKET_NAME,
+ "glueAccessLogsDatabase": GLUE_DATABASE_NAME,
+ "glueAppAccessLogsTable": GLUE_TABLE_NAME,
+ "glueWafAccessLogsTable": GLUE_TABLE_NAME,
+ "athenaWorkGroup": ATHENA_WORK_GROUP_NAME
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def athena_partitions_non_existent_work_group_test_event_setup():
+ event = {
+ "accessLogBucket": S3_BUCKET_NAME,
+ "wafLogBucket": S3_BUCKET_NAME,
+ "glueAccessLogsDatabase": GLUE_DATABASE_NAME,
+ "glueAppAccessLogsTable": GLUE_TABLE_NAME,
+ "glueWafAccessLogsTable": GLUE_TABLE_NAME,
+ "athenaWorkGroup": NON_EXISTENT_WORK_GROUP
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def partition_s3_cloudfront_log_test_event_setup():
+ environ['KEEP_ORIGINAL_DATA'] = 'No'
+ environ['ENDPOINT'] = 'CloudFront'
+ event = {
+ "Records": [{
+ "s3": {
+ "bucket": {
+ "name": S3_BUCKET_NAME
+ },
+ "object": {
+ "key": CLOUDFRONT_LOG_FILE_S3_KEY
+ }
+ }
+ }]
+ }
+ return event
+
+
+@pytest.fixture(scope='function')
+def partition_s3_alb_log_test_event_setup():
+ environ['KEEP_ORIGINAL_DATA'] = 'No'
+ environ['ENDPOINT'] = 'ALB'
+ event = {
+ "Records": [{
+ "s3": {
+ "bucket": {
+ "name": S3_BUCKET_NAME
+ },
+ "object": {
+ "key": ALB_LOG_FILE_S3_KEY
+ }
+ }
+ }]
+ }
+ return event
\ No newline at end of file
diff --git a/source/log_parser/test/test_add_athena_partitions.py b/source/log_parser/test/test_add_athena_partitions.py
new file mode 100644
index 00000000..24148bd1
--- /dev/null
+++ b/source/log_parser/test/test_add_athena_partitions.py
@@ -0,0 +1,36 @@
+###############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance with the License. #
+# A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express #
+# or implied. See the License for the specific language governing permissions#
+# and limitations under the License. #
+###############################################################################
+
+from add_athena_partitions import lambda_handler
+
+def test_add_athena_partitions(athena_partitions_test_event_setup):
+ # Happy path: adding partitions should complete without raising
+ event = athena_partitions_test_event_setup
+ lambda_handler(event, {})
+
+
+def test_add_athena_partitions_non_existent_work_group(athena_partitions_non_existent_work_group_test_event_setup):
+ # A non-existent work group is expected to make the handler raise
+ event = athena_partitions_non_existent_work_group_test_event_setup
+ succeeded = False
+ try:
+ lambda_handler(event, {})
+ succeeded = True
+ except Exception:
+ assert not succeeded
diff --git a/source/log_parser/test/test_build_athena_queries.py b/source/log_parser/test/test_build_athena_queries.py
index 23904271..888ba342 100644
--- a/source/log_parser/test/test_build_athena_queries.py
+++ b/source/log_parser/test/test_build_athena_queries.py
@@ -1,19 +1,16 @@
-##############################################################################
-# Copyright 2020 Amazon.com, Inc. and its affiliates. All Rights Reserved.
-# #
-# Licensed under the Amazon Software License (the "License"). You may not #
-# use this file except in compliance with the License. A copy of the #
-# License is located at #
-# #
-# http://aws.amazon.com/asl/ #
-# #
-# or in the "license" file accompanying this file. This file is distributed #
-# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, #
-# express or implied. See the License for the specific language governing #
-# permissions and limitations under the License. #
-##############################################################################
-
-import sys
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
import datetime
import logging
import build_athena_queries, add_athena_partitions
@@ -29,6 +26,13 @@
waf_block_period = 240
error_threshold = 2000
request_threshold = 50
+request_threshold_by_country = '{"TR":30,"CN":100,"SE":150}'
+no_request_threshold_by_country = ''
+group_by_country = 'country'
+group_by_uri = 'uri'
+group_by_country_uri = 'country and uri'
+no_group_by = 'none'
+athena_query_run_schedule = 5
cloudfront_log_type = 'CLOUDFRONT'
alb_log_type = 'ALB'
waf_log_type = 'WAF'
@@ -57,16 +61,109 @@ def test_build_athena_queries_for_alb_logs():
assert query_string == alb_logs_query
-def test_build_athena_queries_for_waf_logs():
+def test_build_athena_queries_for_waf_logs_one():
+ # test original waf log query one - no group by; no threshold by country
+ query_string = build_athena_queries.build_athena_query_for_waf_logs(
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, no_request_threshold_by_country, no_group_by,
+ athena_query_run_schedule
+ )
+
+ with open('./test/test_data/waf_logs_query_1.txt', 'r') as file:
+ waf_logs_query = file.read()
+ assert type(query_string) is str
+ assert query_string == waf_logs_query
+
+def test_build_athena_queries_for_waf_logs_two():
+ # test waf log query two - group by country; no threshold by country
+ query_string = build_athena_queries.build_athena_query_for_waf_logs(
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, no_request_threshold_by_country, group_by_country,
+ athena_query_run_schedule
+ )
+
+ with open('./test/test_data/waf_logs_query_2.txt', 'r') as file:
+ waf_logs_query = file.read()
+ assert type(query_string) is str
+ assert query_string == waf_logs_query
+
+def test_build_athena_queries_for_waf_logs_three():
+ # test waf log query three - group by uri; no threshold by country
+ query_string = build_athena_queries.build_athena_query_for_waf_logs(
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, no_request_threshold_by_country, group_by_uri,
+ athena_query_run_schedule
+ )
+
+ with open('./test/test_data/waf_logs_query_3.txt', 'r') as file:
+ waf_logs_query = file.read()
+ assert type(query_string) is str
+ assert query_string == waf_logs_query
+
+def test_build_athena_queries_for_waf_logs_four():
+ # test waf log query four - group by country and uri; no threshold by country
+ query_string = build_athena_queries.build_athena_query_for_waf_logs(
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, no_request_threshold_by_country, group_by_country_uri,
+ athena_query_run_schedule
+ )
+
+ with open('./test/test_data/waf_logs_query_4.txt', 'r') as file:
+ waf_logs_query = file.read()
+ assert type(query_string) is str
+ assert query_string == waf_logs_query
+
+def test_build_athena_queries_for_waf_logs_five():
+ # test waf log query five - no group by; has threshold by country
query_string = build_athena_queries.build_athena_query_for_waf_logs(
- log, database_name, table_name,
- end_timestamp, waf_block_period, request_threshold)
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, request_threshold_by_country, no_group_by,
+ athena_query_run_schedule
+ )
- with open('./test/test_data/waf_logs_query.txt', 'r') as file:
+ with open('./test/test_data/waf_logs_query_5.txt', 'r') as file:
waf_logs_query = file.read()
assert type(query_string) is str
assert query_string == waf_logs_query
+def test_build_athena_queries_for_waf_logs_six():
+ # test waf log query six - group by country; has threshold by country
+ query_string = build_athena_queries.build_athena_query_for_waf_logs(
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, request_threshold_by_country, group_by_country,
+ athena_query_run_schedule
+ )
+
+ with open('./test/test_data/waf_logs_query_5.txt', 'r') as file:
+ waf_logs_query = file.read()
+ assert type(query_string) is str
+ assert query_string == waf_logs_query
+
+def test_build_athena_queries_for_waf_logs_seven():
+ # test waf log query seven - group by uri; has threshold by country
+ query_string = build_athena_queries.build_athena_query_for_waf_logs(
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, request_threshold_by_country, group_by_uri,
+ athena_query_run_schedule
+ )
+
+ with open('./test/test_data/waf_logs_query_6.txt', 'r') as file:
+ waf_logs_query = file.read()
+ assert type(query_string) is str
+ assert query_string == waf_logs_query
+
+def test_build_athena_queries_for_waf_logs_eight():
+ # test waf log query eight - group by country and uri; has threshold by country
+ query_string = build_athena_queries.build_athena_query_for_waf_logs(
+ log, database_name, table_name, end_timestamp, waf_block_period,
+ request_threshold, request_threshold_by_country, group_by_country_uri,
+ athena_query_run_schedule
+ )
+
+ with open('./test/test_data/waf_logs_query_6.txt', 'r') as file:
+ waf_logs_query = file.read()
+ assert type(query_string) is str
+ assert query_string == waf_logs_query
@freeze_time("2020-05-08 02:21:34", tz_offset=-4)
def test_add_athena_partitions_build_query_string():
diff --git a/source/log_parser/test/test_data/E3HXCM7PFRG6HT.2023-04-24-21.d740d76bCloudFront.gz b/source/log_parser/test/test_data/E3HXCM7PFRG6HT.2023-04-24-21.d740d76bCloudFront.gz
new file mode 100644
index 00000000..505ab076
Binary files /dev/null and b/source/log_parser/test/test_data/E3HXCM7PFRG6HT.2023-04-24-21.d740d76bCloudFront.gz differ
diff --git a/source/log_parser/test/test_data/XXXXXXXXXXXX_elasticloadbalancing_us-east-1_app.ApplicationLoadBalancer.fa87e1db7badc175_20230424T2110Z_X.X.X.X_4c8scnzy.log.gz b/source/log_parser/test/test_data/XXXXXXXXXXXX_elasticloadbalancing_us-east-1_app.ApplicationLoadBalancer.fa87e1db7badc175_20230424T2110Z_X.X.X.X_4c8scnzy.log.gz
new file mode 100644
index 00000000..fa642b10
Binary files /dev/null and b/source/log_parser/test/test_data/XXXXXXXXXXXX_elasticloadbalancing_us-east-1_app.ApplicationLoadBalancer.fa87e1db7badc175_20230424T2110Z_X.X.X.X_4c8scnzy.log.gz differ
diff --git a/source/log_parser/test/test_data/cf-access-log-sample.gz b/source/log_parser/test/test_data/cf-access-log-sample.gz
new file mode 100755
index 00000000..f27d9311
Binary files /dev/null and b/source/log_parser/test/test_data/cf-access-log-sample.gz differ
diff --git a/source/log_parser/test/test_data/test_athena_query_result.csv b/source/log_parser/test/test_data/test_athena_query_result.csv
new file mode 100644
index 00000000..acf6a06a
--- /dev/null
+++ b/source/log_parser/test/test_data/test_athena_query_result.csv
@@ -0,0 +1,2 @@
+"client_ip","max_counter_per_min"
+"10.x.x.x","2798"
\ No newline at end of file
diff --git a/source/log_parser/test/test_data/test_waf_log.gz b/source/log_parser/test/test_data/test_waf_log.gz
new file mode 100644
index 00000000..40ac5efe
Binary files /dev/null and b/source/log_parser/test/test_data/test_waf_log.gz differ
diff --git a/source/log_parser/test/test_data/waf-stack-app_log_out.json b/source/log_parser/test/test_data/waf-stack-app_log_out.json
new file mode 100644
index 00000000..69221d29
--- /dev/null
+++ b/source/log_parser/test/test_data/waf-stack-app_log_out.json
@@ -0,0 +1 @@
+{"general": {"x.x.0.0": {"max_counter_per_min": 100, "updated_at": "2023-04-21 22:57:59 UTC+0000"}}, "uriList": {}}
\ No newline at end of file
diff --git a/source/log_parser/test/test_data/waf_logs_query.txt b/source/log_parser/test/test_data/waf_logs_query_1.txt
similarity index 100%
rename from source/log_parser/test/test_data/waf_logs_query.txt
rename to source/log_parser/test/test_data/waf_logs_query_1.txt
diff --git a/source/log_parser/test/test_data/waf_logs_query_2.txt b/source/log_parser/test/test_data/waf_logs_query_2.txt
new file mode 100644
index 00000000..f5b0575c
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_logs_query_2.txt
@@ -0,0 +1,32 @@
+SELECT
+ client_ip, country,
+ MAX_BY(counter, counter) as max_counter_per_min
+ FROM (
+ WITH logs_with_concat_data AS (
+ SELECT
+ httprequest.clientip as client_ip,httprequest.country as country,
+ from_unixtime(timestamp/1000) as datetime
+ FROM
+ testdb.testtable
+ WHERE year = 2020
+ AND month = 05
+ AND day = 07
+ AND hour between 09 and 13
+ )
+ SELECT
+ client_ip, country,
+ COUNT(*) as counter
+ FROM
+ logs_with_concat_data
+ WHERE
+ datetime > TIMESTAMP '2020-05-07 09:33:00'
+ GROUP BY
+ client_ip, country,
+ date_trunc('minute', datetime)
+ HAVING
+ COUNT(*) >= 10.0
+) GROUP BY
+ client_ip, country
+ORDER BY
+ max_counter_per_min DESC
+LIMIT 10000;
\ No newline at end of file
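
All of the expected queries share one two-stage shape: the inner SELECT counts requests per client (plus any group-by columns) per minute bucket, and the outer SELECT keeps each client's busiest minute. `MAX_BY(counter, counter)` is simply `MAX(counter)` here, since the value expression and the ordering key are the same column. A rough, self-contained Python equivalent of the aggregation, on made-up records:

```python
from collections import Counter

# Made-up pre-parsed log records: (client_ip, country, minute_bucket).
records = [
    ("10.0.0.1", "TR", "2020-05-07 09:34"),
    ("10.0.0.1", "TR", "2020-05-07 09:34"),
    ("10.0.0.1", "TR", "2020-05-07 09:35"),
    ("10.0.0.2", "SE", "2020-05-07 09:34"),
]

# Inner query: COUNT(*) per (client_ip, country, minute).
per_minute = Counter(records)

# Outer query: MAX(counter) per (client_ip, country) -- exactly what
# MAX_BY(counter, counter) computes when value and ordering key coincide.
max_per_min = {}
for (ip, country, _minute), count in per_minute.items():
    key = (ip, country)
    max_per_min[key] = max(max_per_min.get(key, 0), count)

print(max_per_min)  # {('10.0.0.1', 'TR'): 2, ('10.0.0.2', 'SE'): 1}
```
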
diff --git a/source/log_parser/test/test_data/waf_logs_query_3.txt b/source/log_parser/test/test_data/waf_logs_query_3.txt
new file mode 100644
index 00000000..2f2ba0bd
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_logs_query_3.txt
@@ -0,0 +1,32 @@
+SELECT
+ client_ip, uri,
+ MAX_BY(counter, counter) as max_counter_per_min
+ FROM (
+ WITH logs_with_concat_data AS (
+ SELECT
+ httprequest.clientip as client_ip,httprequest.uri as uri,
+ from_unixtime(timestamp/1000) as datetime
+ FROM
+ testdb.testtable
+ WHERE year = 2020
+ AND month = 05
+ AND day = 07
+ AND hour between 09 and 13
+ )
+ SELECT
+ client_ip, uri,
+ COUNT(*) as counter
+ FROM
+ logs_with_concat_data
+ WHERE
+ datetime > TIMESTAMP '2020-05-07 09:33:00'
+ GROUP BY
+ client_ip, uri,
+ date_trunc('minute', datetime)
+ HAVING
+ COUNT(*) >= 10.0
+) GROUP BY
+ client_ip, uri
+ORDER BY
+ max_counter_per_min DESC
+LIMIT 10000;
\ No newline at end of file
diff --git a/source/log_parser/test/test_data/waf_logs_query_4.txt b/source/log_parser/test/test_data/waf_logs_query_4.txt
new file mode 100644
index 00000000..42af5cd8
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_logs_query_4.txt
@@ -0,0 +1,32 @@
+SELECT
+ client_ip, country, uri,
+ MAX_BY(counter, counter) as max_counter_per_min
+ FROM (
+ WITH logs_with_concat_data AS (
+ SELECT
+ httprequest.clientip as client_ip,httprequest.country as country, httprequest.uri as uri,
+ from_unixtime(timestamp/1000) as datetime
+ FROM
+ testdb.testtable
+ WHERE year = 2020
+ AND month = 05
+ AND day = 07
+ AND hour between 09 and 13
+ )
+ SELECT
+ client_ip, country, uri,
+ COUNT(*) as counter
+ FROM
+ logs_with_concat_data
+ WHERE
+ datetime > TIMESTAMP '2020-05-07 09:33:00'
+ GROUP BY
+ client_ip, country, uri,
+ date_trunc('minute', datetime)
+ HAVING
+ COUNT(*) >= 10.0
+) GROUP BY
+ client_ip, country, uri
+ORDER BY
+ max_counter_per_min DESC
+LIMIT 10000;
\ No newline at end of file
diff --git a/source/log_parser/test/test_data/waf_logs_query_5.txt b/source/log_parser/test/test_data/waf_logs_query_5.txt
new file mode 100644
index 00000000..eb4c78f2
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_logs_query_5.txt
@@ -0,0 +1,35 @@
+SELECT
+ client_ip, country,
+ MAX_BY(counter, counter) as max_counter_per_min
+ FROM (
+ WITH logs_with_concat_data AS (
+ SELECT
+ httprequest.clientip as client_ip,httprequest.country as country,
+ from_unixtime(timestamp/1000) as datetime
+ FROM
+ testdb.testtable
+ WHERE year = 2020
+ AND month = 05
+ AND day = 07
+ AND hour between 09 and 13
+ )
+ SELECT
+ client_ip, country,
+ COUNT(*) as counter
+ FROM
+ logs_with_concat_data
+ WHERE
+ datetime > TIMESTAMP '2020-05-07 09:33:00'
+ GROUP BY
+ client_ip, country,
+ date_trunc('minute', datetime)
+ HAVING
+ (COUNT(*) >= 6.0 AND country = 'TR') OR
+ (COUNT(*) >= 20.0 AND country = 'CN') OR
+ (COUNT(*) >= 30.0 AND country = 'SE') OR
+ (COUNT(*) >= 10.0 AND country NOT IN ('TR','CN','SE'))
+) GROUP BY
+ client_ip, country
+ORDER BY
+ max_counter_per_min DESC
+LIMIT 10000;
\ No newline at end of file
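
`waf_logs_query_5.txt` is the first fixture to exercise per-country thresholds: the HAVING clause applies each configured country's own floor and falls back to the default threshold for everything else. A small illustrative builder for that clause (a sketch of the pattern only, not the solution's actual generator, which lives in build_athena_queries.py):

```python
def build_having_clause(default_threshold, threshold_by_country):
    """Sketch: one condition per configured country, plus a default
    fallback for all remaining countries."""
    conditions = [
        f"(COUNT(*) >= {float(threshold)} AND country = '{country}')"
        for country, threshold in threshold_by_country.items()
    ]
    countries = ",".join(f"'{c}'" for c in threshold_by_country)
    conditions.append(
        f"(COUNT(*) >= {float(default_threshold)} AND country NOT IN ({countries}))"
    )
    return " OR\n    ".join(conditions)

print(build_having_clause(10, {"TR": 6, "CN": 20, "SE": 30}))
# -> (COUNT(*) >= 6.0 AND country = 'TR') OR ... OR
#    (COUNT(*) >= 10.0 AND country NOT IN ('TR','CN','SE'))
```
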
diff --git a/source/log_parser/test/test_data/waf_logs_query_6.txt b/source/log_parser/test/test_data/waf_logs_query_6.txt
new file mode 100644
index 00000000..a8b5344c
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_logs_query_6.txt
@@ -0,0 +1,35 @@
+SELECT
+ client_ip, country, uri,
+ MAX_BY(counter, counter) as max_counter_per_min
+ FROM (
+ WITH logs_with_concat_data AS (
+ SELECT
+ httprequest.clientip as client_ip,httprequest.country as country, httprequest.uri as uri,
+ from_unixtime(timestamp/1000) as datetime
+ FROM
+ testdb.testtable
+ WHERE year = 2020
+ AND month = 05
+ AND day = 07
+ AND hour between 09 and 13
+ )
+ SELECT
+ client_ip, country, uri,
+ COUNT(*) as counter
+ FROM
+ logs_with_concat_data
+ WHERE
+ datetime > TIMESTAMP '2020-05-07 09:33:00'
+ GROUP BY
+ client_ip, country, uri,
+ date_trunc('minute', datetime)
+ HAVING
+ (COUNT(*) >= 6.0 AND country = 'TR') OR
+ (COUNT(*) >= 20.0 AND country = 'CN') OR
+ (COUNT(*) >= 30.0 AND country = 'SE') OR
+ (COUNT(*) >= 10.0 AND country NOT IN ('TR','CN','SE'))
+) GROUP BY
+ client_ip, country, uri
+ORDER BY
+ max_counter_per_min DESC
+LIMIT 10000;
\ No newline at end of file
diff --git a/source/log_parser/test/test_data/waf_stack-app_log_conf.json b/source/log_parser/test/test_data/waf_stack-app_log_conf.json
new file mode 100644
index 00000000..61d7a2e1
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_stack-app_log_conf.json
@@ -0,0 +1 @@
+{"general": {"errorThreshold": 5, "blockPeriod": 240, "errorCodes": ["400", "401", "403", "404", "405"]}, "uriList": {}}
\ No newline at end of file
diff --git a/source/log_parser/test/test_data/waf_stack-waf_log_conf.json b/source/log_parser/test/test_data/waf_stack-waf_log_conf.json
new file mode 100644
index 00000000..431ceaf7
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_stack-waf_log_conf.json
@@ -0,0 +1,17 @@
+{
+ "general": {
+ "requestThreshold": 20,
+ "blockPeriod": 240,
+ "ignoredSufixes": [".css", ".js", ".jpeg"]
+ },
+ "uriList": {
+ "/socket.io/": {
+ "requestThreshold": 5,
+ "blockPeriod": 100
+ },
+ "/assets/public/images/products/green_smoothie.jpg": {
+ "requestThreshold": 5,
+ "blockPeriod": 100
+ }
+ }
+}
\ No newline at end of file
diff --git a/source/log_parser/test/test_data/waf_stack-waf_log_out.json b/source/log_parser/test/test_data/waf_stack-waf_log_out.json
new file mode 100644
index 00000000..2cc117af
--- /dev/null
+++ b/source/log_parser/test/test_data/waf_stack-waf_log_out.json
@@ -0,0 +1,13 @@
+{
+ "general": {
+ "x.0.0.0": {
+ "max_counter_per_min": 715,
+ "updated_at": "2023-04-24 22:16:11 UTC+0000"
+ },
+ "x.x.0.0": {
+ "max_counter_per_min": 8571,
+ "updated_at": "2023-04-24 22:16:11 UTC+0000"
+ }
+ },
+ "uriList": {}
+}
\ No newline at end of file
diff --git a/source/log_parser/test/test_log_parser.py b/source/log_parser/test/test_log_parser.py
new file mode 100644
index 00000000..cc026b71
--- /dev/null
+++ b/source/log_parser/test/test_log_parser.py
@@ -0,0 +1,137 @@
+###############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance with the License.
+# A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express #
+# or implied. See the License for the specific language governing permissions#
+# and limitations under the License. #
+###############################################################################
+
+from os import environ
+from log_parser import log_parser
+
+
+UNDEFINED_HANDLER_MESSAGE = "[lambda_handler] undefined handler for this type of event"
+ATHENA_LOG_PARSER_PROCESSED_MESSAGE = "[lambda_handler] Athena scheduler event processed."
+ATHENA_APP_LOG_QUERY_RESULT_PROCESSED_MESSAGE = "[lambda_handler] Athena app log query result processed."
+ATHENA_WAF_LOG_QUERY_RESULT_PROCESSED_MESSAGE = "[lambda_handler] Athena AWS WAF log query result processed."
+APP_LOG_LAMBDA_PARSER_PROCESSED_MESSAGE = "[lambda_handler] App access log file processed."
+WAF_LOG_LAMBDA_PARSER_PROCESSED_MESSAGE = "[lambda_handler] AWS WAF access log file processed."
+TYPE_ERROR_MESSAGE = "TypeError: string indices must be integers"
+
+
+def test_undefined_handler_event():
+ event = {"test": "value"}
+ result = {"message": UNDEFINED_HANDLER_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+
+
+def test_undefined_handler_records(cloudfront_log_lambda_parser_test_event_setup):
+    event = cloudfront_log_lambda_parser_test_event_setup
+    # local lowercase name so the module-level constant is not shadowed
+    undefined_handler_message = "[lambda_handler] undefined handler for bucket %s" % environ["APP_ACCESS_LOG_BUCKET"]
+    environ.pop('APP_ACCESS_LOG_BUCKET')
+    result = {"message": undefined_handler_message}
+    assert result == log_parser.lambda_handler(event, {})
+
+
+def test_cloudfront_log_athena_parser(app_log_athena_parser_test_event_setup):
+ environ['LOG_TYPE'] = "CLOUDFRONT"
+ event = app_log_athena_parser_test_event_setup
+ result = {"message": ATHENA_LOG_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('LOG_TYPE')
+
+
+def test_alb_log_athena_parser(app_log_athena_parser_test_event_setup):
+ environ['LOG_TYPE'] = "ALB"
+ event = app_log_athena_parser_test_event_setup
+ result = {"message": ATHENA_LOG_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('LOG_TYPE')
+
+
+def test_waf_log_athena_parser(waf_log_athena_parser_test_event_setup):
+ environ['LOG_TYPE'] = "WAF"
+ event = waf_log_athena_parser_test_event_setup
+ result = {"message": ATHENA_LOG_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('LOG_TYPE')
+
+
+def test_app_log_athena_result_processor(app_log_athena_query_result_test_event_setup):
+ event = app_log_athena_query_result_test_event_setup
+ result = {"message": ATHENA_APP_LOG_QUERY_RESULT_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('APP_ACCESS_LOG_BUCKET')
+
+
+def test_waf_log_athena_result_processor(waf_log_athena_query_result_test_event_setup):
+ event = waf_log_athena_query_result_test_event_setup
+ result = {"message": ATHENA_WAF_LOG_QUERY_RESULT_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('WAF_ACCESS_LOG_BUCKET')
+
+
+def test_cloudfront_log_lambda_parser(cloudfront_log_lambda_parser_test_event_setup):
+ environ['LOG_TYPE'] = "cloudfront"
+ event = cloudfront_log_lambda_parser_test_event_setup
+ result = {"message": APP_LOG_LAMBDA_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('APP_ACCESS_LOG_BUCKET')
+ environ.pop('LOG_TYPE')
+
+
+def test_alb_log_lambda_parser(alb_log_lambda_parser_test_event_setup):
+ environ['LOG_TYPE'] = "alb"
+ event = alb_log_lambda_parser_test_event_setup
+ result = {"message": APP_LOG_LAMBDA_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('APP_ACCESS_LOG_BUCKET')
+ environ.pop('LOG_TYPE')
+
+
+def test_alb_log_lambda_parser_over_ip_range_limit(alb_log_lambda_parser_test_event_setup):
+ environ['LOG_TYPE'] = "alb"
+ environ['LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION'] = '1'
+ event = alb_log_lambda_parser_test_event_setup
+ result = {"message": APP_LOG_LAMBDA_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('APP_ACCESS_LOG_BUCKET')
+ environ.pop('LOG_TYPE')
+ environ.pop('LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION')
+
+
+def test_waf_lambda_parser(waf_log_lambda_parser_test_event_setup):
+ environ['LOG_TYPE'] = "waf"
+ event = waf_log_lambda_parser_test_event_setup
+ result = {"message": WAF_LOG_LAMBDA_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('WAF_ACCESS_LOG_BUCKET')
+ environ.pop('LOG_TYPE')
+
+
+def test_waf_lambda_parser_over_ip_range_limit(waf_log_lambda_parser_test_event_setup):
+ environ['LOG_TYPE'] = "waf"
+ environ['LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION'] = '1'
+ event = waf_log_lambda_parser_test_event_setup
+ result = {"message": WAF_LOG_LAMBDA_PARSER_PROCESSED_MESSAGE}
+ assert result == log_parser.lambda_handler(event, {})
+ environ.pop('WAF_ACCESS_LOG_BUCKET')
+ environ.pop('LOG_TYPE')
+ environ.pop('LIMIT_IP_ADDRESS_RANGES_PER_IP_MATCH_CONDITION')
+
+
+def test_lambda_parser_unsupported_log_type(cloudfront_log_lambda_parser_test_event_setup):
+    try:
+        environ['LOG_TYPE'] = "unsupported"
+        event = cloudfront_log_lambda_parser_test_event_setup
+        # invoke the handler so an unsupported LOG_TYPE can actually raise
+        log_parser.lambda_handler(event, {})
+    except Exception as e:
+        assert str(e) == TYPE_ERROR_MESSAGE
+    finally:
+        environ.pop('APP_ACCESS_LOG_BUCKET')
+        environ.pop('LOG_TYPE')
\ No newline at end of file
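
test_log_parser.py drives a single lambda_handler through several personae: scheduled Athena events, Athena query-result events, and raw S3 log notifications, with `LOG_TYPE` and the `*_ACCESS_LOG_BUCKET` variables steering dispatch and an "undefined handler" message for anything unrecognized. A condensed sketch of that routing, using the exact messages asserted above; the three predicate arguments are hypothetical stand-ins for the event-shape inspection that log_parser.py actually performs:

```python
import os

def route(event, is_scheduler_event, is_athena_result, is_s3_notification):
    """Sketch of the dispatch the tests above exercise -- not the
    solution's real code. Predicates are caller-supplied stand-ins."""
    log_type = os.environ.get("LOG_TYPE", "").lower()
    if is_scheduler_event(event):
        return {"message": "[lambda_handler] Athena scheduler event processed."}
    if is_athena_result(event):
        if "WAF_ACCESS_LOG_BUCKET" in os.environ:
            return {"message": "[lambda_handler] Athena AWS WAF log query result processed."}
        return {"message": "[lambda_handler] Athena app log query result processed."}
    if is_s3_notification(event):
        if log_type in ("cloudfront", "alb"):
            return {"message": "[lambda_handler] App access log file processed."}
        if log_type == "waf":
            return {"message": "[lambda_handler] AWS WAF access log file processed."}
    return {"message": "[lambda_handler] undefined handler for this type of event"}
```
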
diff --git a/source/log_parser/test/test_partition_s3_logs.py b/source/log_parser/test/test_partition_s3_logs.py
new file mode 100644
index 00000000..a8e742db
--- /dev/null
+++ b/source/log_parser/test/test_partition_s3_logs.py
@@ -0,0 +1,42 @@
+###############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance with the License.
+# A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express #
+# or implied. See the License for the specific language governing permissions#
+# and limitations under the License. #
+###############################################################################
+
+from os import environ
+from partition_s3_logs import lambda_handler
+
+def test_partition_s3_cloudfront_log(partition_s3_cloudfront_log_test_event_setup):
+    event = partition_s3_cloudfront_log_test_event_setup
+    # lambda_handler raises on failure, so completing the call means success
+    lambda_handler(event, {})
+    environ.pop('KEEP_ORIGINAL_DATA')
+    environ.pop('ENDPOINT')
+
+
+def test_partition_s3_alb_log(partition_s3_alb_log_test_event_setup):
+    event = partition_s3_alb_log_test_event_setup
+    lambda_handler(event, {})
+    environ.pop('KEEP_ORIGINAL_DATA')
+    environ.pop('ENDPOINT')
\ No newline at end of file
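
The partition tests feed lambda_handler S3 events for the CloudFront and ALB sample files added earlier in this diff, whose names embed a timestamp (e.g. `E3HXCM7PFRG6HT.2023-04-24-21...`). The handler's job is to copy each object under a hive-style year/month/day/hour prefix so Athena can prune partitions. A guess at the flavor of that transformation; the authoritative key layout is whatever partition_s3_logs.py emits:

```python
import re

def partitioned_key(filename):
    """Illustrative only: derive a hive-style partition prefix from a
    CloudFront log name like 'E3HXCM7PFRG6HT.2023-04-24-21.d740d76b.gz'."""
    match = re.search(r"\.(\d{4})-(\d{2})-(\d{2})-(\d{2})\.", filename)
    if not match:
        raise ValueError("no timestamp in file name: " + filename)
    year, month, day, hour = match.groups()
    return f"year={year}/month={month}/day={day}/hour={hour}/{filename}"

print(partitioned_key("E3HXCM7PFRG6HT.2023-04-24-21.d740d76b.gz"))
# year=2023/month=04/day=24/hour=21/E3HXCM7PFRG6HT.2023-04-24-21.d740d76b.gz
```
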
diff --git a/source/log_parser/test/test_solution_metrics.py b/source/log_parser/test/test_solution_metrics.py
index 465c634a..7f3db1e3 100644
--- a/source/log_parser/test/test_solution_metrics.py
+++ b/source/log_parser/test/test_solution_metrics.py
@@ -1,5 +1,5 @@
###############################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance with the License.
@@ -17,14 +17,12 @@
def test_send_solution_metrics():
- uuid = "waf3.0_test_00001"
+ uuid = "waf_test_00001"
solution_id = "waf_test"
data = {
- "test_string1": "waf3.0_test",
+ "test_string1": "waf_test",
"test_string2": "test_1"
}
- url = "https://oszclq8tyh.execute-api.us-east-1.amazonaws.com/prod/generic"
- # url = 'https://metrics.awssolutionsbuilder.com/generic'
+ url = "https://testurl.com/generic"
response = send_metrics(data, uuid, solution_id, url)
- status_code = response.status_code
- assert status_code == 200
+ assert response is not None
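
With this change the metrics test no longer depends on a live endpoint: the URL is a placeholder and the assertion only requires that send_metrics returns something. If stronger offline coverage is ever wanted, the HTTP call could be stubbed instead; a sketch assuming send_metrics ultimately issues its POST through `requests.post` (adjust if the real transport differs), using pytest-mock's `mocker` fixture:

```python
import requests
from lib.solution_metrics import send_metrics  # path as used elsewhere in this diff

def test_send_solution_metrics_stubbed(mocker):
    # Assumption: send_metrics calls requests.post under the hood.
    fake_response = mocker.Mock(status_code=200)
    mocker.patch.object(requests, "post", return_value=fake_response)

    response = send_metrics({"k": "v"}, "waf_test_00001", "waf_test",
                            "https://testurl.com/generic")
    assert response.status_code == 200
```
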
diff --git a/source/log_parser/testing_requirements.txt b/source/log_parser/testing_requirements.txt
deleted file mode 100644
index 7e3aaf95..00000000
--- a/source/log_parser/testing_requirements.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-botocore>=1.12.99
-boto3>=1.9.99
-mock>=5.0.0
-moto>=4.0.13
-pytest>=7.2.0
-pytest-mock>=3.10.0
-pytest-runner>=6.0.0
-uuid>=1.30
-backoff>=2.2.1
-freezegun>=1.2.2
-pytest-cov
-pytest-env
\ No newline at end of file
diff --git a/source/reputation_lists_parser/.coveragerc b/source/reputation_lists_parser/.coveragerc
new file mode 100644
index 00000000..3aa79036
--- /dev/null
+++ b/source/reputation_lists_parser/.coveragerc
@@ -0,0 +1,29 @@
+[run]
+omit =
+ test/*
+ */__init__.py
+ **/__init__.py
+ backoff/*
+ bin/*
+ boto3/*
+ botocore/*
+ certifi/*
+ charset*/*
+ crhelper*
+ chardet*
+ dateutil/*
+ idna/*
+ jmespath/*
+ lib/*
+ package*
+ python_*
+ requests/*
+ s3transfer/*
+ six*
+ tenacity*
+ tests
+ urllib3/*
+ yaml
+ PyYAML-*
+source =
+ .
\ No newline at end of file
diff --git a/source/reputation_lists_parser/__init__.py b/source/reputation_lists_parser/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/reputation_lists_parser/reputation-lists.py b/source/reputation_lists_parser/reputation-lists.py
deleted file mode 100644
index ad4792a8..00000000
--- a/source/reputation_lists_parser/reputation-lists.py
+++ /dev/null
@@ -1,393 +0,0 @@
-######################################################################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
-# with the License. A copy of the License is located at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
-# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
-# and limitations under the License. #
-######################################################################################################################
-import datetime
-import logging
-import sys
-import os
-import requests
-import json
-import re
-from time import sleep
-from ipaddress import ip_address
-from ipaddress import ip_network
-from ipaddress import IPv4Network
-from ipaddress import IPv6Network
-import boto3
-from os import environ
-from botocore.config import Config
-from lib.solution_metrics import send_metrics
-from lib.waflibv2 import WAFLIBv2
-from lib.boto3_util import create_client
-
-waflib = WAFLIBv2()
-
-delay_between_updates = 5
-
-# Find matching ip address ranges from a line
-def find_ips(line, prefix=""):
- reg = re.compile('^' + prefix + '\\s*((?:(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9][0-9]|[0-9])\\.){3}(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9][0-9]|[0-9])(?:/(?:3[0-2]|[1-2][0-9]|[0-9]))?)')
- ips = re.findall(reg, line)
-
- return ips
-
-# Read each address from source URL
-def read_url_list(log, current_list, url, prefix=""):
- try:
- log.info("[read_url_list]reading url " + url)
- file = requests.get(url, timeout=600)
- new_ip_count = 0
- line_count = 0
- current_ip_count = len(current_list)
-
- # Proceed if request returns success code 200
- if file.status_code == 200:
- for line in file.iter_lines():
- decoded_line = line.decode("utf-8").strip() # remove spaces on either end of string
- line_count = line_count + 1
- new_ips = find_ips(decoded_line, prefix)
- current_list = list(set(current_list) | set(new_ips))
- new_ip_count = new_ip_count + len(new_ips)
-
- log.info("[read_url_list]"+ str(new_ip_count) + " ip address ranges read from " + url + "; " + str(line_count) + " lines")
- log.info("[read_url_list]number of new ip address ranges added to current list: " + str(len(current_list) - current_ip_count)
- + "; total number of ip address ranges on curent list: " + str(len(current_list)))
- except Exception as e:
- log.error(e)
-
- return current_list
-
-
-# Fully qualify each address with network cidr
-def process_url_list(log, current_list):
- process_list = []
- for source_ip in current_list:
- try:
- ip_type = "IPV%s" % ip_address(source_ip).version
- if (ip_type == "IPV4"):
- process_list.append(IPv4Network(source_ip).with_prefixlen)
- elif (ip_type == "IPV6"):
- process_list.append(IPv6Network(source_ip).with_prefixlen)
- except:
- try:
- if (ip_network(source_ip)):
- process_list.append(source_ip)
- except Exception as e:
- log.debug(source_ip + " not an IP address.")
- return process_list
-
-
-# push each source_ip into the appropriate IPSet
-def populate_ipsets(log, scope, ipset_name_v4, ipset_name_v6, ipset_arn_v4, ipset_arn_v6, current_list):
- addressesV4 = []
- addressesV6 = []
-
- for address in current_list:
- try:
- source_ip = address.split("/")[0]
- ip_type = "IPV%s" % ip_address(source_ip).version
- if ip_type == "IPV4":
- addressesV4.append(address)
- elif ip_type == "IPV6":
- addressesV6.append(address)
- except Exception as e:
- log.error(e)
-
- waflib.update_ip_set(log, scope, ipset_name_v4, ipset_arn_v4, addressesV4)
- ipset = waflib.get_ip_set(log, scope, ipset_name_v4, ipset_arn_v4)
-
- log.info(ipset)
- log.info("There are %d IP addresses in IPSet %s", len(ipset["IPSet"]["Addresses"]), ipset_name_v4)
-
- # Sleep for a few seconds to mitigate AWS WAF Update API call throttling issue
- sleep(delay_between_updates)
-
- waflib.update_ip_set(log, scope, ipset_name_v6, ipset_arn_v6, addressesV6)
- ipset = waflib.get_ip_set(log, scope, ipset_name_v6, ipset_arn_v6)
-
- log.info(ipset)
- log.info("There are %d IP addresses in IPSet %s", len(ipset["IPSet"]["Addresses"]), ipset_name_v6)
-
- return
-
-
-def send_response(log, event, context, responseStatus, responseData, resourceId, reason=None):
- log.debug("[send_response] Start")
-
- responseUrl = event['ResponseURL']
- cw_logs_url = "https://console.aws.amazon.com/cloudwatch/home?region=%s#logEventViewer:group=%s;stream=%s" % (
- context.invoked_function_arn.split(':')[3], context.log_group_name, context.log_stream_name)
-
- log.info(responseUrl)
- responseBody = {}
- responseBody['Status'] = responseStatus
- responseBody['Reason'] = reason or ('See the details in CloudWatch Logs: ' + cw_logs_url)
- responseBody['PhysicalResourceId'] = resourceId
- responseBody['StackId'] = event['StackId']
- responseBody['RequestId'] = event['RequestId']
- responseBody['LogicalResourceId'] = event['LogicalResourceId']
- responseBody['NoEcho'] = False
- responseBody['Data'] = responseData
-
- json_responseBody = json.dumps(responseBody)
- log.debug("Response body:\n" + json_responseBody)
-
- headers = {
- 'content-type': '',
- 'content-length': str(len(json_responseBody))
- }
-
- try:
- response = requests.put(responseUrl,
- data=json_responseBody,
- headers=headers,
- timeout=600)
- log.debug("Status code: " + response.reason)
-
- except Exception as error:
- log.error("[send_response] Failed executing requests.put(..)")
- log.error(str(error))
-
- log.debug("[send_response] End")
-
-
-def send_anonymous_usage_data(log, scope):
- try:
- if 'SEND_ANONYMOUS_USAGE_DATA' not in os.environ or os.getenv('SEND_ANONYMOUS_USAGE_DATA').lower() != 'yes':
- return
-
- log.debug("[send_anonymous_usage_data] Start")
- cw = create_client('cloudwatch')
- usage_data = {
- "data_type": "reputation_lists",
- "ipv4_reputation_lists_size": 0,
- "ipv4_reputation_lists": 0,
- "ipv6_reputation_lists_size": 0,
- "ipv6_reputation_lists": 0,
- "allowed_requests": 0,
- "blocked_requests": 0,
- "blocked_requests_ip_reputation_lists": 0,
- "waf_type": os.getenv('LOG_TYPE')
- }
-
- # --------------------------------------------------------------------------------------------------------------
- log.debug("[send_anonymous_usage_data] Get size of the Reputation List IP set")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = waflib.get_ip_set(log, scope, os.getenv('IP_SET_NAME_REPUTATIONV4'),
- os.getenv('IP_SET_ID_REPUTATIONV4'))
-
- if response is not None:
- usage_data['ipv4_reputation_lists_size'] = len(response['IPSet']['Addresses'])
- usage_data['ipv4_reputation_lists'] = response['IPSet']['Addresses']
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to get size of the Reputation List IPV4 set")
- log.debug(str(error))
-
- try:
- response = waflib.get_ip_set(log, scope, os.getenv('IP_SET_NAME_REPUTATIONV6'),
- os.getenv('IP_SET_ID_REPUTATIONV6'))
- if response is not None:
- usage_data['ipv6_reputation_lists_size'] = len(response['IPSet']['Addresses'])
- usage_data['ipv6_reputation_lists'] = response['IPSet']['Addresses']
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to get size of the Reputation List IPV6 set")
- log.debug(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.debug("[send_anonymous_usage_data] Get total number of allowed requests")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='AllowedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=3600,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=3600),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": "ALL"
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['allowed_requests'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to get Num Allowed Requests")
- log.debug(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.debug("[send_anonymous_usage_data] Get total number of blocked requests")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=3600,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=3600),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": "ALL"
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['blocked_requests'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to get Num Allowed Requests")
- log.debug(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.debug("[send_anonymous_usage_data] Get total number of blocked requests for Reputation Lists Rule")
- # --------------------------------------------------------------------------------------------------------------
- try:
- response = cw.get_metric_statistics(
- MetricName='BlockedRequests',
- Namespace='AWS/WAFV2',
- Statistics=['Sum'],
- Period=3600,
- StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=3600),
- EndTime=datetime.datetime.utcnow(),
- Dimensions=[
- {
- "Name": "Rule",
- "Value": os.getenv('IPREPUTATIONLIST_METRICNAME')
- },
- {
- "Name": "WebACL",
- "Value": os.getenv('STACK_NAME')
- },
- {
- "Name": "Region",
- "Value": os.getenv('AWS_REGION')
- }
- ]
- )
-
- if len(response['Datapoints']):
- usage_data['blocked_requests_ip_reputation_lists'] = response['Datapoints'][0]['Sum']
-
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to get Num Allowed Requests")
- log.debug(str(error))
-
- # --------------------------------------------------------------------------------------------------------------
- log.info("[send_anonymous_usage_data] Send Data")
- # --------------------------------------------------------------------------------------------------------------
-
- response = send_metrics(data=usage_data)
- response_code = response.status_code
- log.debug('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
- log.debug("[send_anonymous_usage_data] End")
- except Exception as error:
- log.debug("[send_anonymous_usage_data] Failed to send data")
-
-
-# ======================================================================================================================
-# Lambda Entry Point
-# ======================================================================================================================
-
-def lambda_handler(event, context):
- log = logging.getLogger()
- log.info('[lambda_handler] Start')
-
- responseStatus = 'SUCCESS'
- reason = None
- responseData = {}
- result = {
- 'StatusCode': '200',
- 'Body': {'message': 'success'}
- }
- log_level = str(os.getenv('LOG_LEVEL').upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
-
- current_list = []
- try:
- scope = os.getenv('SCOPE')
- ipset_name_v4 = os.getenv('IP_SET_NAME_REPUTATIONV4')
- ipset_name_v6 = os.getenv('IP_SET_NAME_REPUTATIONV6')
- ipset_arn_v4 = os.getenv('IP_SET_ID_REPUTATIONV4')
- ipset_arn_v6 = os.getenv('IP_SET_ID_REPUTATIONV6')
- URL_LIST = os.getenv('URL_LIST')
- url_list = json.loads(URL_LIST)
-
- log.info("SCOPE = %s", scope)
- log.info("ipset_name_v4 = %s", ipset_name_v4)
- log.info("ipset_name_v6 = %s", ipset_name_v6)
- log.info("ipset_arn_v4 = %s", ipset_arn_v4)
- log.info("ipset_arn_v6 = %s", ipset_arn_v6)
- log.info("URLLIST = %s", url_list)
- except Exception as e:
- log.error(e)
- raise
-
- try:
- for info in url_list:
- try:
- if("prefix" in info):
- current_list = read_url_list(log, current_list, info["url"], info["prefix"])
- else:
- current_list = read_url_list(log, current_list, info["url"])
- except:
- log.error("URL info not valid %s", info)
-
- current_list = sorted(current_list, key=str)
- current_list = process_url_list(log, current_list)
-
- populate_ipsets(log, scope, ipset_name_v4, ipset_name_v6, ipset_arn_v4, ipset_arn_v6, current_list)
- send_anonymous_usage_data(log, scope)
-
- except Exception as error:
- log.error(str(error))
- responseStatus = 'FAILED'
- reason = str(error)
- result = {
- 'statusCode': '400',
- 'body': {'message': reason}
- }
- finally:
- log.info('[lambda_handler] End')
- if 'ResponseURL' in event:
- resourceId = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else event['LogicalResourceId']
- log.info("ResourceId %s", resourceId)
- send_response(log, event, context, responseStatus, responseData, resourceId, reason)
-
- return json.dumps(result)
diff --git a/source/reputation_lists_parser/reputation_lists.py b/source/reputation_lists_parser/reputation_lists.py
new file mode 100644
index 00000000..5c1927c8
--- /dev/null
+++ b/source/reputation_lists_parser/reputation_lists.py
@@ -0,0 +1,285 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+import os
+import requests
+import json
+import re
+from time import sleep
+from ipaddress import ip_address
+from ipaddress import ip_network
+from ipaddress import IPv4Network
+from ipaddress import IPv6Network
+from os import environ
+from lib.solution_metrics import send_metrics
+from lib.waflibv2 import WAFLIBv2
+from lib.cfn_response import send_response
+from lib.cw_metrics_util import WAFCloudWatchMetrics
+from lib.logging_util import set_log_level
+
+waflib = WAFLIBv2()
+
+delay_between_updates = 5
+CW_METRIC_PERIOD_SECONDS = 3600 # One hour in seconds
+
+# Find matching ip address ranges from a line
+def find_ips(line, prefix=""):
+ reg = re.compile('^' + prefix + '\\s*((?:(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9][0-9]|[0-9])\\.){3}(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9][0-9]|[0-9])(?:/(?:3[0-2]|[1-2][0-9]|[0-9]))?)')
+ ips = re.findall(reg, line)
+
+ return ips
+
+# Read each address from source URL
+def read_url_list(log, current_list, url, prefix=""):
+ try:
+ log.info("[read_url_list]reading url " + url)
+ response = requests.get(url, timeout=30)
+ new_ip_count = 0
+ line_count = 0
+ current_ip_count = len(current_list)
+
+ # Proceed if request returns success code 200
+ if response.status_code == 200:
+ for line in response.iter_lines():
+ decoded_line = line.decode("utf-8").strip() # remove spaces on either end of string
+ line_count = line_count + 1
+ new_ips = find_ips(decoded_line, prefix)
+ current_list = list(set(current_list) | set(new_ips))
+ new_ip_count = new_ip_count + len(new_ips)
+
+ log.info("[read_url_list]"+ str(new_ip_count) + " ip address ranges read from " + url + "; " + str(line_count) + " lines")
+ log.info("[read_url_list]number of new ip address ranges added to current list: " + str(len(current_list) - current_ip_count)
+ + "; total number of ip address ranges on curent list: " + str(len(current_list)))
+ except Exception as e:
+ log.error(e)
+
+ return current_list
+
+
+# Fully qualify each address with network cidr
+def process_url_list(log, current_list):
+ process_list = []
+ for source_ip in current_list:
+ try:
+ ip_type = "IPV%s" % ip_address(source_ip).version
+ if (ip_type == "IPV4"):
+ process_list.append(IPv4Network(source_ip).with_prefixlen)
+ elif (ip_type == "IPV6"):
+ process_list.append(IPv6Network(source_ip).with_prefixlen)
+        except ValueError:  # not a bare IP address; try parsing as a network
+ try:
+ if (ip_network(source_ip)):
+ process_list.append(source_ip)
+ except Exception:
+ log.debug(source_ip + " not an IP address.")
+ return process_list
+
+
+# push each source_ip into the appropriate IPSet
+def populate_ipsets(log, scope, ipset_name_v4, ipset_name_v6, ipset_arn_v4, ipset_arn_v6, current_list):
+ addresses_v4 = []
+ addresses_v6 = []
+
+ for address in current_list:
+ try:
+ source_ip = address.split("/")[0]
+ ip_type = "IPV%s" % ip_address(source_ip).version
+ if ip_type == "IPV4":
+ addresses_v4.append(address)
+ elif ip_type == "IPV6":
+ addresses_v6.append(address)
+ except Exception as e:
+ log.error(e)
+
+ waflib.update_ip_set(log, scope, ipset_name_v4, ipset_arn_v4, addresses_v4)
+ ipset = waflib.get_ip_set(log, scope, ipset_name_v4, ipset_arn_v4)
+
+ log.info(ipset)
+ log.info("There are %d IP addresses in IPSet %s", len(ipset["IPSet"]["Addresses"]), ipset_name_v4)
+
+ # Sleep for a few seconds to mitigate AWS WAF Update API call throttling issue
+ sleep(delay_between_updates)
+
+ waflib.update_ip_set(log, scope, ipset_name_v6, ipset_arn_v6, addresses_v6)
+ ipset = waflib.get_ip_set(log, scope, ipset_name_v6, ipset_arn_v6)
+
+ log.info(ipset)
+ log.info("There are %d IP addresses in IPSet %s", len(ipset["IPSet"]["Addresses"]), ipset_name_v6)
+
+
+def initialize_usage_data():
+ usage_data = {
+ "data_type": "reputation_lists",
+ "ipv4_reputation_lists_size": 0,
+ "ipv4_reputation_lists": 0,
+ "ipv6_reputation_lists_size": 0,
+ "ipv6_reputation_lists": 0,
+ "allowed_requests": 0,
+ "blocked_requests": 0,
+ "blocked_requests_ip_reputation_lists": 0,
+ "waf_type": os.getenv('LOG_TYPE'),
+ "provisioner": os.getenv('provisioner') if "provisioner" in environ else "cfn"
+ }
+ return usage_data
+
+
+def get_ip_reputation_usage_data(log, scope, ipset_name,
+ ipset_arn, usage_data,
+ usage_data_ip_list_size_field,
+ usage_data_ip_list_field):
+ log.info("[get_ip_reputation_usage_data] Get size of %s", ipset_name)
+
+ # Get ip reputation ipv4 and ipv6 lists
+ if 'IP_SET_ID_REPUTATIONV4' in environ or 'IP_SET_ID_REPUTATIONV6' in environ:
+ response = waflib.get_ip_set(log, scope, ipset_name, ipset_arn)
+
+ if response is not None:
+ usage_data[usage_data_ip_list_size_field] = len(response['IPSet']['Addresses'])
+ usage_data[usage_data_ip_list_field] = response['IPSet']['Addresses']
+ return usage_data
+
+
+def send_anonymous_usage_data(log, scope):
+ try:
+ if 'SEND_ANONYMOUS_USAGE_DATA' not in os.environ or os.getenv('SEND_ANONYMOUS_USAGE_DATA').lower() != 'yes':
+ return
+
+ log.debug("[send_anonymous_usage_data] Start")
+ cw = WAFCloudWatchMetrics(log)
+ usage_data = initialize_usage_data()
+
+ usage_data = get_ip_reputation_usage_data(
+ log, scope,
+ os.getenv('IP_SET_NAME_REPUTATIONV4'),
+ os.getenv('IP_SET_ID_REPUTATIONV4'),
+ usage_data,
+ 'ipv4_reputation_lists_size',
+ 'ipv4_reputation_lists'
+ )
+
+ usage_data = get_ip_reputation_usage_data(
+ log, scope,
+ os.getenv('IP_SET_NAME_REPUTATIONV6'),
+ os.getenv('IP_SET_ID_REPUTATIONV6'),
+ usage_data,
+ 'ipv6_reputation_lists_size',
+ 'ipv6_reputation_lists'
+ )
+
+ # Get the count of allowed requests for all the waf rules from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'AllowedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ 'ALL',
+ usage_data,
+ 'allowed_requests',
+ 0
+ )
+
+ # Get the count of blocked requests for all the waf rules from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'BlockedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ 'ALL',
+ usage_data,
+ 'blocked_requests',
+ 0
+ )
+
+ # Get the count of blocked requests for the Reputation Lists Rule from cloudwatch metrics
+ usage_data = cw.add_waf_cw_metric_to_usage_data(
+ 'BlockedRequests',
+ CW_METRIC_PERIOD_SECONDS,
+ os.getenv('IPREPUTATIONLIST_METRICNAME'),
+ usage_data,
+ 'blocked_requests_ip_reputation_lists',
+ 0
+ )
+
+ # Send usage data
+ log.info('[send_anonymous_usage_data] Send usage data: \n{}'.format(usage_data))
+ response = send_metrics(data=usage_data)
+ response_code = response.status_code
+ log.debug('[send_anonymous_usage_data] Response Code: {}'.format(response_code))
+ log.debug("[send_anonymous_usage_data] End")
+ except Exception:
+ log.debug("[send_anonymous_usage_data] Failed to send data")
+
+
+# ======================================================================================================================
+# Lambda Entry Point
+# ======================================================================================================================
+
+def lambda_handler(event, context):
+ log = set_log_level()
+ log.info('[lambda_handler] Start')
+
+ response_status = 'SUCCESS'
+ reason = None
+ response_data = {}
+ result = {
+ 'StatusCode': '200',
+ 'Body': {'message': 'success'}
+ }
+
+ current_list = []
+ try:
+ scope = os.getenv('SCOPE')
+ ipset_name_v4 = os.getenv('IP_SET_NAME_REPUTATIONV4')
+ ipset_name_v6 = os.getenv('IP_SET_NAME_REPUTATIONV6')
+ ipset_arn_v4 = os.getenv('IP_SET_ID_REPUTATIONV4')
+ ipset_arn_v6 = os.getenv('IP_SET_ID_REPUTATIONV6')
+ URL_LIST = os.getenv('URL_LIST')
+ url_list = json.loads(URL_LIST)
+
+ log.info("SCOPE = %s", scope)
+ log.info("ipset_name_v4 = %s", ipset_name_v4)
+ log.info("ipset_name_v6 = %s", ipset_name_v6)
+ log.info("ipset_arn_v4 = %s", ipset_arn_v4)
+ log.info("ipset_arn_v6 = %s", ipset_arn_v6)
+ log.info("URLLIST = %s", url_list)
+ except Exception as e:
+ log.error(e)
+ raise
+
+ try:
+        for info in url_list:
+            try:
+                if "prefix" in info:
+                    current_list = read_url_list(log, current_list, info["url"], info["prefix"])
+                else:
+                    current_list = read_url_list(log, current_list, info["url"])
+            except Exception:
+                log.error("URL info not valid %s", info)
+
+ current_list = sorted(current_list, key=str)
+ current_list = process_url_list(log, current_list)
+
+ populate_ipsets(log, scope, ipset_name_v4, ipset_name_v6, ipset_arn_v4, ipset_arn_v6, current_list)
+ send_anonymous_usage_data(log, scope)
+
+ except Exception as error:
+ log.error(str(error))
+ response_status = 'FAILED'
+ reason = str(error)
+ result = {
+ 'statusCode': '400',
+ 'body': {'message': reason}
+ }
+ finally:
+ log.info('[lambda_handler] End')
+ if 'ResponseURL' in event:
+ resource_id = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else event['LogicalResourceId']
+ log.info("ResourceId %s", resource_id)
+ send_response(log, event, context, response_status, response_data, resource_id, reason)
+
+ return json.dumps(result)
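
Since find_ips is small and pure, its behavior is easy to pin down against the EDROP-like format used in the new test data (`x.x.x.x/x ; comment`, one entry per line); the sample address below is illustrative:

```python
import re

def find_ips(line, prefix=""):
    # Copied verbatim from reputation_lists.py above.
    reg = re.compile('^' + prefix + '\\s*((?:(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9][0-9]|[0-9])\\.){3}(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9][0-9]|[0-9])(?:/(?:3[0-2]|[1-2][0-9]|[0-9]))?)')
    return re.findall(reg, line)

print(find_ips("192.0.2.0/24 ; SBL123"))  # ['192.0.2.0/24']
print(find_ips("not an address"))          # []
print(find_ips("; 192.0.2.0/24"))          # [] -- pattern is anchored at line start
```

process_url_list then normalizes whatever find_ips collected: a bare address such as `192.0.2.1` becomes `192.0.2.1/32` via `IPv4Network(...).with_prefixlen`, so every entry pushed into the IPSets is in CIDR form.
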
diff --git a/source/reputation_lists_parser/requirements.txt b/source/reputation_lists_parser/requirements.txt
index 511213cc..635b9d03 100644
--- a/source/reputation_lists_parser/requirements.txt
+++ b/source/reputation_lists_parser/requirements.txt
@@ -1,2 +1,2 @@
-requests>=2.28.2
-backoff>=2.2.1
\ No newline at end of file
+requests~=2.28.2
+backoff~=2.2.1
\ No newline at end of file
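
The dependency pins switch from `>=` to the compatible-release operator `~=`, which accepts patch updates but blocks the next minor release: `requests~=2.28.2` allows 2.28.9 but not 2.29.0. A quick illustration with the packaging library (not part of the solution):

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=2.28.2")
print("2.28.9" in spec)  # True  -- patch releases stay in range
print("2.29.0" in spec)  # False -- next minor release is excluded
```
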
diff --git a/source/reputation_lists_parser/requirements_dev.txt b/source/reputation_lists_parser/requirements_dev.txt
new file mode 100644
index 00000000..ab317bdd
--- /dev/null
+++ b/source/reputation_lists_parser/requirements_dev.txt
@@ -0,0 +1,11 @@
+botocore~=1.29.85
+boto3~=1.26.85
+mock~=5.0.1
+moto~=4.1.4
+pytest~=7.2.2
+pytest-mock~=3.10.0
+pytest-runner~=6.0.0
+freezegun~=1.2.2
+pytest-cov~=4.0.0
+pytest-env~=0.8.1
+pyparsing~=3.0.9
\ No newline at end of file
diff --git a/source/reputation_lists_parser/test/__init__.py b/source/reputation_lists_parser/test/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/reputation_lists_parser/test/conftest.py b/source/reputation_lists_parser/test/conftest.py
new file mode 100644
index 00000000..5bae4722
--- /dev/null
+++ b/source/reputation_lists_parser/test/conftest.py
@@ -0,0 +1,31 @@
+###############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance with the License.
+# A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express #
+# or implied. See the License for the specific language governing permissions#
+# and limitations under the License. #
+###############################################################################
+
+import pytest
+from os import environ
+
+
+@pytest.fixture(scope='module', autouse=True)
+def test_environment_vars_setup():
+ environ['IP_SET_NAME_REPUTATIONV4'] = 'test_ReputationListsSetIPV4'
+ environ['IP_SET_NAME_REPUTATIONV6'] = 'test_ReputationListsSetIPV6'
+ environ['IP_SET_ID_REPUTATIONV4'] = 'arn:aws:wafv2:us-east-1:11111111111:regional/ipset/test'
+ environ['IP_SET_ID_REPUTATIONV6'] = 'arn:aws:wafv2:us-east-1:11111111111:regional/ipset/test'
+ environ['SCOPE'] = 'REGIONAL'
+ environ['SEND_ANONYMOUS_USAGE_DATA'] = 'Yes'
+ environ['LOG_LEVEL'] = 'INFO'
+ environ['UUID'] = 'test_uuid'
+ environ['SOLUTION_ID'] = 'SO0006'
+ environ['METRICS_URL'] = 'https://testurl.com/generic'
\ No newline at end of file
diff --git a/source/reputation_lists_parser/test/test_data/__init__.py b/source/reputation_lists_parser/test/test_data/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/reputation_lists_parser/test/test_data/test_data.txt b/source/reputation_lists_parser/test/test_data/test_data.txt
new file mode 100644
index 00000000..f35a41c8
--- /dev/null
+++ b/source/reputation_lists_parser/test/test_data/test_data.txt
@@ -0,0 +1,9 @@
+; Test Project EDROP List 2023/04/28 - (c) 2023 The Test Project
+; https://www.testendpoint.com/testdummyvalues
+; Last-Modified: Fri, 28 Apr 2023 11:06:55 GMT
+; Expires: Sat, 29 Apr 2023 15:39:42 GMT
+x.x.x.x/x ; abcd1234
+x.x.x.x/x ; abcd1234
+x.x.x.x/x ; abcd1234
+x.x.x.x/x ; abcd1234
+x.x.x.x/x ; abcd1234
diff --git a/source/reputation_lists_parser/test/test_reputation_lists_parser.py b/source/reputation_lists_parser/test/test_reputation_lists_parser.py
new file mode 100644
index 00000000..091b1877
--- /dev/null
+++ b/source/reputation_lists_parser/test/test_reputation_lists_parser.py
@@ -0,0 +1,79 @@
+##############################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). #
+# You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is #
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY #
+# KIND, express or implied. See the License for the specific language #
+# governing permissions and limitations under the License. #
+##############################################################################
+
+from reputation_lists_parser import reputation_lists
+from lib.cw_metrics_util import WAFCloudWatchMetrics
+from os import environ
+import pytest
+import requests
+
+
+def test_lambda_handler_raises_exception_if_env_variable_not_present(mocker):
+ event = {}
+ context = {}
+ mocker.patch.object(WAFCloudWatchMetrics, 'add_waf_cw_metric_to_usage_data')
+ with pytest.raises(TypeError):
+ reputation_lists.lambda_handler(event, context)
+
+
+def test_lambda_handler_returns_error_when_populate_ip_sets_function_fails(mocker):
+ event = {}
+ context = {}
+ environ['URL_LIST'] = '[{"url":"https://www.testmocketenvtest.com"},' \
+ '{"url":"https://www.testmocketenvagaintest.com"}] '
+ mocker.patch.object(reputation_lists, 'populate_ipsets', side_effect=Exception('mocked error'))
+ mocker.patch.object(requests, 'get')
+ response = reputation_lists.lambda_handler(event, context)
+ assert response == '{"statusCode": "400", "body": {"message": "mocked error"}}'
+
+
+def test_lambda_handler_returns_success(mocker):
+ event = {}
+ context = {}
+ environ['URL_LIST'] = '[{"url":"https://www.testmocketenvtest.com"},' \
+ '{"url":"https://www.testmocketenvagaintest.com"}] '
+ mocker.patch.object(requests, 'get')
+ with open('./test/test_data/test_data.txt', 'r') as file:
+ test_data = file.read()
+ requests.get.return_value = test_data
+ ip_set = {'IPSet':
+ {
+ 'Name': 'prodIPReputationListsSetIPV6',
+ 'Id': '4342423-d428-4e9d-ba3a-376737347db',
+ 'ARN': 'arn:aws:wafv2:us-east-1:111111111:regional/ipset/ptestvalue',
+ 'Description': 'Block Reputation List IPV6 addresses',
+ 'IPAddressVersion': 'IPV6',
+ 'Addresses': []
+ },
+ 'LockToken': 'test-token',
+ 'ResponseMetadata': {
+ 'RequestId': 'test-id',
+ 'HTTPStatusCode': 200,
+ 'HTTPHeaders':
+ {'x-amzn-requestid': 'test-id',
+ 'content-type': 'application/x-amz-json-1.1',
+ 'content-length': 'test',
+ 'date': 'Thu, 27 Apr 2023 03:50:24 GMT'},
+ 'RetryAttempts': 0
+ }
+ }
+    mocker.patch.object(reputation_lists.waflib, 'update_ip_set')
+    mocker.patch.object(reputation_lists.waflib, 'get_ip_set')
+ mocker.patch.object(WAFCloudWatchMetrics, 'add_waf_cw_metric_to_usage_data')
+ reputation_lists.waflib.get_ip_set.return_value = ip_set
+ response = reputation_lists.lambda_handler(event, context)
+ assert response == '{"StatusCode": "200", "Body": {"message": "success"}}'
diff --git a/source/timer/.coveragerc b/source/timer/.coveragerc
new file mode 100644
index 00000000..3aa79036
--- /dev/null
+++ b/source/timer/.coveragerc
@@ -0,0 +1,29 @@
+[run]
+omit =
+ test/*
+ */__init__.py
+ **/__init__.py
+ backoff/*
+ bin/*
+ boto3/*
+ botocore/*
+ certifi/*
+ charset*/*
+ crhelper*
+ chardet*
+ dateutil/*
+ idna/*
+ jmespath/*
+ lib/*
+ package*
+ python_*
+ requests/*
+ s3transfer/*
+ six*
+ tenacity*
+ tests
+ urllib3/*
+ yaml
+ PyYAML-*
+source =
+ .
\ No newline at end of file
diff --git a/source/timer/__init__.py b/source/timer/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/timer/requirements.txt b/source/timer/requirements.txt
index e000c3a7..7656ef0a 100644
--- a/source/timer/requirements.txt
+++ b/source/timer/requirements.txt
@@ -1 +1 @@
-requests>=2.22.0
\ No newline at end of file
+requests~=2.28.2
\ No newline at end of file
diff --git a/source/timer/requirements_dev.txt b/source/timer/requirements_dev.txt
new file mode 100644
index 00000000..1f9e6301
--- /dev/null
+++ b/source/timer/requirements_dev.txt
@@ -0,0 +1,10 @@
+botocore~=1.29.85
+boto3~=1.26.85
+mock~=5.0.1
+moto~=4.1.4
+pytest~=7.2.2
+pytest-mock~=3.10.0
+pytest-runner~=6.0.0
+freezegun~=1.2.2
+pytest-cov~=4.0.0
+pytest-env~=0.8.1
\ No newline at end of file
diff --git a/source/timer/test/__init__.py b/source/timer/test/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/source/timer/test/conftest.py b/source/timer/test/conftest.py
new file mode 100644
index 00000000..f22357c0
--- /dev/null
+++ b/source/timer/test/conftest.py
@@ -0,0 +1,43 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+import pytest
+
+class Context:
+ def __init__(self, invoked_function_arn, log_group_name, log_stream_name):
+ self.invoked_function_arn = invoked_function_arn
+ self.log_group_name = log_group_name
+ self.log_stream_name = log_stream_name
+
+
+@pytest.fixture(scope="session")
+def example_context():
+ return Context(':::invoked_function_arn', 'log_group_name', 'log_stream_name')
+
+
+@pytest.fixture(scope="session")
+def timer_event():
+ return {
+ 'LogicalResourceId': 'Timer',
+ 'RequestId': '25d75d10-c5fa-48da-a79a-d827bfe0a465',
+ 'RequestType': 'Create',
+ 'ResourceProperties': {
+ 'DeliveryStreamArn': 'arn:aws:firehose:us-east-2:XXXXXXXXXXXX:deliverystream/aws-waf-logs-wafohio_xToOQk',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'WAFWebACLArn': 'arn:aws:wafv2:us-east-2:XXXXXXXXXXXX:regional/webacl/wafohio/c2e77a1b-6bb3-4d9d-86f9-0bfd9b6fdcaf'
+ },
+ 'ResourceType': 'Custom::Timer',
+ 'ResponseURL': 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/',
+ 'ServiceToken': 'arn:aws:lambda:us-east-2:XXXXXXXXXXXX:function:wafohio-CustomResource-WnfNLnBqtXPF',
+ 'StackId': 'arn:aws:cloudformation:us-east-2:XXXXXXXXXXXX:stack/wafohio/70c177d0-e2c7-11ed-9e83-02ff465f0e71'
+ }
\ No newline at end of file
diff --git a/source/timer/test/test_timer.py b/source/timer/test/test_timer.py
new file mode 100644
index 00000000..04ed272e
--- /dev/null
+++ b/source/timer/test/test_timer.py
@@ -0,0 +1,19 @@
+######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. A copy of the License is located at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
+# and limitations under the License. #
+######################################################################################################################
+
+from timer.timer import lambda_handler
+
+def test_timer(timer_event, example_context):
+ result = lambda_handler(timer_event, example_context)
+ expected = '{"StatusCode": "200", "Body": {"message": "success"}}'
+ assert result == expected
\ No newline at end of file
diff --git a/source/timer/timer.py b/source/timer/timer.py
index b969a255..88122481 100644
--- a/source/timer/timer.py
+++ b/source/timer/timer.py
@@ -1,5 +1,5 @@
######################################################################################################################
-# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
# with the License. A copy of the License is located at #
@@ -11,74 +11,29 @@
# and limitations under the License. #
######################################################################################################################
-import logging
import time
-import sys
import os
-import requests
import json
-
-
-def send_response(log, event, context, responseStatus, responseData, resourceId, reason=None):
- log.debug("[send_response] Start")
-
- responseUrl = event['ResponseURL']
- cw_logs_url = "https://console.aws.amazon.com/cloudwatch/home?region=%s#logEventViewer:group=%s;stream=%s" % (
- context.invoked_function_arn.split(':')[3], context.log_group_name, context.log_stream_name)
-
- log.info(responseUrl)
- responseBody = {}
- responseBody['Status'] = responseStatus
- responseBody['Reason'] = reason or ('See the details in CloudWatch Logs: ' + cw_logs_url)
- responseBody['PhysicalResourceId'] = resourceId
- responseBody['StackId'] = event['StackId']
- responseBody['RequestId'] = event['RequestId']
- responseBody['LogicalResourceId'] = event['LogicalResourceId']
- responseBody['NoEcho'] = False
- responseBody['Data'] = responseData
-
- json_responseBody = json.dumps(responseBody)
- log.debug("Response body:\n" + json_responseBody)
-
- headers = {
- 'content-type': '',
- 'content-length': str(len(json_responseBody))
- }
-
- try:
- response = requests.put(responseUrl,
- data=json_responseBody,
- headers=headers,
- timeout=600)
- log.debug("Status code: " + response.reason)
-
- except Exception as error:
- log.error("[send_response] Failed executing requests.put(..)")
- log.error(str(error))
-
- log.debug("[send_response] End")
+from lib.cfn_response import send_response
+from lib.logging_util import set_log_level
# ======================================================================================================================
# Lambda Entry Point
# ======================================================================================================================
def lambda_handler(event, context):
- log = logging.getLogger()
+ log = set_log_level()
log.info('[lambda_handler] Start')
- responseStatus = 'SUCCESS'
+ response_status = 'SUCCESS'
reason = None
- responseData = {}
+ response_data = {}
result = {
'StatusCode': '200',
'Body': {'message': 'success'}
}
try:
- log_level = str(os.getenv('LOG_LEVEL').upper())
- if log_level not in ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']:
- log_level = 'ERROR'
- log.setLevel(log_level)
count = 3
SECONDS = os.getenv('SECONDS')
if (SECONDS != None):
@@ -87,7 +42,7 @@ def lambda_handler(event, context):
log.info(count)
except Exception as error:
log.error(str(error))
- responseStatus = 'FAILED'
+ response_status = 'FAILED'
reason = str(error)
result = {
'statusCode': '400',
@@ -96,8 +51,8 @@ def lambda_handler(event, context):
finally:
log.info('[lambda_handler] End')
if 'ResponseURL' in event:
- resourceId = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else event['LogicalResourceId']
- log.info("ResourceId %s", resourceId)
- send_response(log, event, context, responseStatus, responseData, resourceId, reason)
+ resource_id = event['PhysicalResourceId'] if 'PhysicalResourceId' in event else event['LogicalResourceId']
+ log.info("ResourceId %s", resource_id)
+ send_response(log, event, context, response_status, response_data, resource_id, reason)
return json.dumps(result)
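
After this cleanup, timer.py is reduced to its essence: a Custom::Timer resource that pauses stack creation for SECONDS seconds (default 3) and then reports back through the shared lib.cfn_response helper. A minimal sketch of the effective behavior, assuming the unchanged body sleeps for the computed count (the retained `time` import suggests as much):

```python
import os
import time

def wait_for_configured_seconds():
    # Default of 3 seconds, overridden by the SECONDS environment
    # variable -- mirrors the count logic visible in the hunk above.
    count = 3
    seconds = os.getenv('SECONDS')
    if seconds is not None:
        count = int(seconds)
    time.sleep(count)  # assumption: the unchanged lines sleep like this
    return count
```
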