Nuclei analyzer (Draft) Closes #1883 #2670
base: develop
Conversation
Signed-off-by: pranjalg1331 <[email protected]>
logger.debug(f"Running command: {' '.join(command)}")

result = subprocess.run(
    command,
Check failure: Code scanning / CodeQL: Uncontrolled command line (Critical), user-provided value
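CodeQL flags this because a user-provided value flows into the `subprocess` call. One common mitigation, sketched below under assumptions (the `run_nuclei_command` signature is taken from the snippet later in this diff, but the validation logic and error shape are hypothetical), is to validate the target URL before building the argv list and to never involve a shell:

```python
import subprocess
from urllib.parse import urlparse

def run_nuclei_command(url, template_dirs):
    """Validate the user-supplied URL, then invoke nuclei via an argv list."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        # refuse anything that is not a plain http(s) target
        return False, {"error": "invalid target URL"}
    command = ["nuclei", "-u", url]
    for tdir in template_dirs:
        command += ["-t", tdir]
    # passing a list with shell=False (the default) prevents shell injection;
    # the timeout bounds how long a single scan may run
    result = subprocess.run(command, capture_output=True, text=True, timeout=600)
    return result.returncode == 0, {"stdout": result.stdout, "stderr": result.stderr}
```

Validation alone does not make the alert disappear, but combined with the list form of `subprocess.run` it removes the shell-interpolation risk the query is about.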
success, result = run_nuclei_command(url, template_dirs)

if success:
    return jsonify(result), 200
Check warning: Code scanning / CodeQL: Information exposure through an exception (Medium), stack trace information
if success:
    return jsonify(result), 200
else:
    return jsonify(result), 500
Check warning: Code scanning / CodeQL: Information exposure through an exception (Medium), stack trace information
{
    "success": False,
    "error": "An unexpected error occurred",
    "details": str(e),
}
Check warning: Code scanning / CodeQL: Information exposure through an exception (Medium), stack trace information
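These three warnings share one cause: putting `str(e)` in the response can leak stack traces or internal paths to clients. A minimal sketch of the usual fix (the function name and log message are assumptions, not code from this PR) is to log the exception server-side and return only a generic body:

```python
import logging

logger = logging.getLogger("nuclei_analyzer")

def safe_error_payload(exc):
    """Log the full exception server-side; expose only a generic message."""
    logger.error("unhandled error in analyzer", exc_info=exc)
    # no str(exc) here: clients get a stable, non-sensitive body
    return {"success": False, "error": "An unexpected error occurred"}
```

The handler would then `return jsonify(safe_error_payload(e)), 500` instead of serializing the exception itself.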
@drosetti, currently my Nuclei image processes requests synchronously. But I noticed in
From the analyzer's perspective I don't think it's required to make the request async. It should work simply by avoiding the polling: it will have the data in the first request, and in case it takes too much time there is a timeout over each task. I think we can keep it sync.
@@ -0,0 +1,54 @@ | |||
FROM python:3.9-slim |
Python 3.9 will reach end of life in October 2025; please use a higher version.
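Bumping the base image is a one-line change; the sketch below assumes the analyzer's dependencies work on a newer interpreter (3.12 is one option, not a version mandated by the reviewer):

```dockerfile
# Python 3.9 is EOL in October 2025; a currently supported slim image instead:
FROM python:3.12-slim
```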
@@ -99,7 +99,7 @@ check_parameters "$@" && shift 2
load_env "docker/.env"
current_version=${REACT_APP_INTELOWL_VERSION/"v"/""}

-docker_analyzers=("pcap_analyzers" "tor_analyzers" "malware_tools_analyzers" "cyberchef" "phoneinfoga" "phishing_analyzers")
+docker_analyzers=("pcap_analyzers" "tor_analyzers" "malware_tools_analyzers" "cyberchef" "phoneinfoga" "phishing_analyzers" "nuclei_analyzer")
Ok, I think you should add an "echo" in the print_help function (line 19) and add the path mapping (line 12)
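The two suggested edits could look roughly like this; apart from `docker_analyzers`, every name and path below is an assumption for illustration, not taken from the real start script:

```shell
# Hypothetical sketch of the two suggested additions to the start script.

print_help() {
  echo "Usage: start.sh <mode> <docker-command>"
  # the suggested extra echo documenting the new analyzer
  echo "  nuclei_analyzer    enable the Nuclei docker analyzer"
}

docker_analyzers=("pcap_analyzers" "tor_analyzers" "malware_tools_analyzers" "cyberchef" "phoneinfoga" "phishing_analyzers" "nuclei_analyzer")

# the suggested path mapping: where the analyzer's compose file lives
# (illustrative location)
nuclei_analyzer_path="integrations/nuclei_analyzer/compose.yaml"
```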
@drosetti, I think we currently poll by default in all the docker-based integrations inside
You're right, the polling always happens. I think the better solution is to change this analyzer: that way all the docker-based analyzers share the same flow, which makes them easier to maintain.
@drosetti Another, simpler approach would be to check whether the task key is present in the result. If the key is not present, the workflow is synchronous and we can pass the result through directly. I believe it would also be more efficient than polling. Which way should I move forward?
Do you want to put a flag and, based on it, do a sync or an async request? I don't like that a lot. Another approach could be to create two methods, one for sync and one for async, where both use a shared function to make the first request; this way we avoid duplicating code and the async version also has the polling requests. How does that sound to you?
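The two-method structure described above can be sketched as follows; all names, the `"key"` field, and the `"status"` values are assumptions for illustration, since the PR's actual client code is not shown here:

```python
import time

def _first_request(do_request, url):
    # shared first call used by both flavours (do_request is injected here
    # so the sketch stays self-contained)
    return do_request(url)

def analyze_sync(do_request, url):
    # synchronous flavour: the first response already carries the report
    return _first_request(do_request, url)

def analyze_async(do_request, poll, url, interval=1.0, max_polls=10):
    # asynchronous flavour: the first response returns a task key,
    # then we poll until the task leaves the "running" state
    first = _first_request(do_request, url)
    key = first["key"]
    for _ in range(max_polls):
        report = poll(key)
        if report.get("status") != "running":
            return report
        time.sleep(interval)
    return {"status": "timeout", "key": key}
```

Both methods funnel through `_first_request`, so the first-request logic exists once and only the async path grows the polling loop.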
@drosetti, for now I have implemented the API with an asynchronous structure. I am working on optimizing it to reduce the analysis time; once that is complete, I will push the changes.
(Please add to the PR name the issue/s that this PR would close if merged, using a GitHub keyword. Example: <feature name>. Closes #999. If your PR is made by a single commit, please add that clause in the commit too. This is all required to automate the closure of related issues.)

Description
This is a draft PR for a Docker-based analyzer for Nuclei. I have created a Flask API around Nuclei (a CLI tool) with the Python subprocess module, and have added a Dockerfile and compose.yaml. Currently the image is being pulled from my Docker Hub repository. Please review my approach so I can proceed with resolving the issue.
Closes #1883
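Since the description wraps the Nuclei CLI with `subprocess`, one piece worth sketching is turning CLI output into the API response. Nuclei can emit JSON-lines output (one finding per line); the exact export flag varies by version, and the function name and payload shape below are assumptions, not the PR's actual code:

```python
import json

def parse_nuclei_output(stdout):
    """Parse JSON-lines scanner output into the analyzer's response payload."""
    findings = []
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            findings.append(json.loads(line))
        except json.JSONDecodeError:
            # skip banner/progress lines that are not JSON
            continue
    return {"success": True, "report": findings}
```

Parsing per line keeps the API tolerant of any non-JSON noise the CLI prints around the findings.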
Type of change
Please delete options that are not relevant.
Checklist
- The pull request is for the branch develop.
- The new plugin configuration was dumped with the dumpplugin command and added in the project as a data migration. ("How to share a plugin with the community")
- If the analyzer supports a new file mimetype, a sample was added inside test_files.zip and the default tests for that mimetype were added in test_classes.py.
- If the analyzer is free to use, it was added to the FREE_TO_USE_ANALYZERS playbook by following this guide.
- The analyzer provides a url that contains this information. This is required for Health Checks.
- _monkeypatch() was used in its class to apply the necessary decorators.
- A valid raw JSON sample was added to the MockUpResponse of the _monkeypatch() method. This serves us to provide a valid sample for testing.
- Linters (Black, Flake, Isort) gave 0 errors. If you have correctly installed pre-commit, it does these checks and adjustments on your behalf.
- Tests were added for the changes (see the tests folder). All the tests (new and old ones) gave 0 errors.
- If DeepSource, Django Doctors or other third-party linters have triggered any alerts during the CI checks, I have solved those alerts.

Important Rules