[SOAR-18543] Palo Alto Cortex XDR #3040
base: develop
Conversation
plugins/palo_alto_cortex_xdr/unit_test/expected/monitor_alerts_full_page_state.json.exp
plugins/palo_alto_cortex_xdr/icon_palo_alto_cortex_xdr/tasks/monitor_alerts/task.py
self.logger.info("Adjusting start time to cutoff value")
start_time = max_lookback_unix
# Reset search_from and search_to if this is not a backfill
if not custom_config:
If we've passed an alert limit into the custom config, could that break this logic? Would it be better to use the lookback-type key we've used in other cases?
You're correct; we should also enter this branch when an alert_limit is present in the custom config but we're not currently doing a backfill.
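As a rough sketch of that fix (the keys alert_limit and lookback are assumptions about how the custom config distinguishes a backfill from a capped normal run, not confirmed plugin keys):

# Sketch only: reset the paging window unless this run is a backfill.
# An alert_limit on its own should not prevent the reset.
is_backfill = bool(custom_config.get("lookback"))  # assumed backfill indicator key
has_alert_limit = custom_config.get("alert_limit") is not None  # assumed cap key
if not custom_config or (has_alert_limit and not is_backfill):
    search_from, search_to = 0, DEFAULT_SEARCH_TO  # DEFAULT_SEARCH_TO is illustrative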
plugins/palo_alto_cortex_xdr/icon_palo_alto_cortex_xdr/tasks/monitor_alerts/task.py
…hange how custom config is named in line with other plugins | Update SDK | Update error handling to return response data in data field
It's such a big update, so sorry if some of the questions aren't needed! I just want to make sure I'm following everything and that we're not using resources/time when we don't need to be.
default_date_lookback = dt_now - timedelta(days=lookback_days)  # if not passed from CPS create on the fly
custom_lookback = custom_config.get(f"max_{LAST_ALERT_TIME}", {})
comparison_date = datetime(**custom_lookback) if custom_lookback else default_date_lookback
now_date_time = self.convert_unix_to_datetime(now_unix)
Should we pass this into get_query_times? We call this as soon as get_query_times gets entered.
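A minimal sketch of what passing it in might look like (the parameter name and call site are illustrative, not the plugin's actual signature):

# Sketch: convert now_unix once at the call site and hand the datetime in,
# instead of converting again as soon as get_query_times is entered.
now_date_time = self.convert_unix_to_datetime(now_unix)
query_values = self.get_query_times(state, custom_config, now_date_time=now_date_time)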
)
lookback_values = [bool(custom_timings), custom_config.get("max_lookback_days"), bool(max_lookback_date_time)]
if any(lookback_value for lookback_value in lookback_values):
I think we can just do if any(lookback_values) and let it iterate for us?
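For illustration only, any already consumes the list directly, so the two forms below are equivalent:

# any() iterates the list itself; the generator expression adds nothing.
if any(lookback_value for lookback_value in lookback_values):  # current form
    ...
if any(lookback_values):  # equivalent, simpler
    ...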
max_lookback_unix = self.convert_datetime_to_unix(max_lookback_date_time)
if start_time < max_lookback_unix:
    self.logger.info(
        f"Start time of {self.convert_unix_to_datetime(start_time)} exceeds cutoff of {max_lookback_date_time}"
We seem to be doing a lot of converting back and forth. Would it be easier to follow if we kept this as a datetime object until we decide which start_time we want to keep, and then did a single datetime -> unix conversion at the end?
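A sketch of that shape, assuming the start time is available as a datetime before the comparison (helper names follow the ones already used in the diff):

# Sketch: compare datetime objects directly and convert to unix once at the end.
start_date_time = self.convert_unix_to_datetime(start_time)
if start_date_time < max_lookback_date_time:
    self.logger.info(f"Start time of {start_date_time} exceeds cutoff of {max_lookback_date_time}")
    self.logger.info("Adjusting start time to cutoff value")
    start_date_time = max_lookback_date_time
start_time = self.convert_datetime_to_unix(start_date_time)  # single conversion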
    )
    self.logger.info("Adjusting start time to cutoff value")
    start_time = max_lookback_unix
# Reset search_from and search_to if this is not a backfill
If we're changing the start time in the state at all, should we not reset search_from and search_to?
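A sketch of resetting the paging window whenever the start time is adjusted, using the state keys shown in the surrounding diff (the trigger condition is an assumption about where the adjustment happens):

# Sketch: any adjustment to the stored start time invalidates the previous paging window.
if start_time != state.get(QUERY_START_TIME):
    state.pop(LAST_SEARCH_FROM, None)
    state.pop(LAST_SEARCH_TO, None)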
state[QUERY_START_TIME] = query_values.QUERY_START_TIME
state[QUERY_END_TIME] = query_values.QUERY_END_TIME
state[LAST_SEARCH_FROM] = query_values.QUERY_SEARCH_FROM
state[LAST_SEARCH_TO] = query_values.QUERY_SEARCH_TO
Do we have an upper limit on this LAST_SEARCH_TO? I want to make sure we don't somehow page so high that we hit an offset limit on their API.
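One possible guard, as a sketch; MAX_SEARCH_OFFSET is an assumed constant and the real ceiling would need to be confirmed against the Cortex XDR API docs:

MAX_SEARCH_OFFSET = 10_000  # assumed ceiling, not a confirmed Cortex XDR limit
search_to = min(query_values.QUERY_SEARCH_TO, MAX_SEARCH_OFFSET)
if search_to >= MAX_SEARCH_OFFSET:
    self.logger.warning("search_to reached the assumed offset ceiling; stopping pagination early")
state[LAST_SEARCH_TO] = search_to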
# Create a new hash for every new alert
- for _, alert in enumerate(alerts):
+ for alert in alerts:
In this loop, should we only add the new hash if the alert isn't deduped? That would reduce how many values are saved to the state and how many are checked against on the next run.
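A sketch of that idea, assuming a hashing helper and a set of previously stored hashes (both names are illustrative, not the plugin's actual helpers):

# Sketch: hash and store only alerts that survive deduplication.
new_alerts, new_alert_hashes = [], []
for alert in alerts:
    alert_hash = self._hash_alert(alert)  # hypothetical helper
    if alert_hash not in existing_hashes:  # existing_hashes assumed to come from the state
        new_alerts.append(alert)
        new_alert_hashes.append(alert_hash)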
state[CURRENT_COUNT] = state.get(CURRENT_COUNT, 0) + results_count
new_alerts, new_alert_hashes = self._dedupe_and_get_highest_time(results, state)
I know this isn't changing the logic, but do we think we need to dedupe while we're paging? If we're using offsets, we could maybe speed up the task execution time by skipping the dedupe and hashing except on the first and last pages.
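Expressed as a sketch, with is_first_page and is_last_page as assumed flags derived from the current offset and total result count:

# Sketch: only the boundary pages can overlap a previous run, so dedupe only there.
if is_first_page or is_last_page:  # assumed flags, not existing plugin variables
    new_alerts, new_alert_hashes = self._dedupe_and_get_highest_time(results, state)
else:
    new_alerts, new_alert_hashes = results, []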
Proposed Changes
Description
Describe the proposed changes:
PR Requirements
Developers, verify you have completed the following items by checking them off:
Testing
Unit Tests
Review our documentation on generating and writing plugin unit tests
In-Product Tests
If you are an InsightConnect customer or have access to an InsightConnect instance, the following in-product tests should be done:
Style
Review the style guide
USER nobody in the Dockerfile when possible
rapid7/insightconnect-python-3-38-slim-plugin:{sdk-version-num} and rapid7/insightconnect-python-3-38-plugin:{sdk-version-num}
insight-plugin validate which calls icon_validate to lint help.md
Functional Checklist
tests/ directory created with insight-plugin samples
tests/$action_bad.json
insight-plugin run -T tests/example.json --debug --jq
insight-plugin run -T all --debug --jq (use PR format at end)
insight-plugin run -R tests/example.json --debug --jq
insight-plugin run --debug --jq (use PR format at end)
Assessment
You must validate your work to reviewers:
insight-plugin validate and make sure everything passes
insight-plugin run -A. For single action validation: insight-plugin run tests/{file}.json -A
(insight-plugin ... | pbcopy) and paste the output in a new post on this PR