This repository has been archived by the owner on Sep 1, 2024. It is now read-only.
Add an insights section to our "View experiment" page.
Examine the effect of PQC on our metrics: check the correlation between message size, number of iterations, and PQC algorithm.
We would like to use genAI to help with this task:
Run a POC with GenAI to analyze the raw data and provide insights (prompt engineering).
Populate the insights into each of our official benchmarking reports.
Research questions to address with GenAI prompt
Analysis should be able to answer the following questions for our users and ourselves:
What are the CPU / memory usage, error rate, byte throughput, request-count throughput, and TLS handshake time for the different algorithm families (PQ / Hybrid / Classic)?
Can we see an exponential rise or anomalies in PQ/Hybrid vs classic algorithms when increasing the number of iterations?
Can we see an exponential rise or anomalies in PQ/Hybrid vs classic algorithms when increasing the message size?
Can we see a substantial effect of PQ/Hybrid algorithms on the metrics that we selected for examination?
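One concrete way to probe the "exponential rise" questions above is to fit both a linear and an exponential model to a metric as the iteration count (or message size) grows, and see which fits better. A minimal stdlib-only sketch; the sample data and function names are illustrative, not taken from our reports:

```python
from math import log, exp

def _lstsq(xs, ys):
    """Ordinary least squares fit ys ~ a*xs + b (closed form)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def growth_kind(iterations, metric):
    """Classify metric growth across iteration counts as 'linear' or
    'exponential' by fitting both models and comparing squared error
    in the metric's original units (metric values must be positive)."""
    a, b = _lstsq(iterations, metric)
    lin_sse = sum((a * x + b - y) ** 2 for x, y in zip(iterations, metric))
    c, d = _lstsq(iterations, [log(y) for y in metric])
    exp_sse = sum((exp(c * x + d) - y) ** 2 for x, y in zip(iterations, metric))
    return "exponential" if exp_sse < lin_sse else "linear"

print(growth_kind([1, 2, 3, 4, 5], [2, 4, 8, 16, 32]))     # exponential
print(growth_kind([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # linear
```

Running the same check per algorithm family would also surface anomalies: a PQ algorithm whose handshake time is classified as exponential while the classic baseline stays linear is exactly the kind of finding the insights section should call out.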
Tasks
Add a new field called insights to our experiment (AKA test suite) JSON.
UI - Conditional section: display the insights in the "View experiment" page only if the insights property is populated.
POC - GenAI analysis of the JSON results, compared against the manual analysis from the previous tasks.
Create a prompt to analyze the experiment results JSON (see the latest prompt in the comments).
Manually add the results to our official benchmarking reports under the insights property. Make sure to review the genAI-generated insights and modify them as needed.
Out of scope: automating genAI insights generation from an Azure instance of the latest GPT model after a run is executed.
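The first two tasks can be illustrated together. The sketch below assumes a hypothetical experiment JSON shape (only the insights field name comes from this issue; the other keys are made up) and shows the condition the UI could use to decide whether to render the section:

```python
# Hypothetical experiment (test suite) document; every key except
# "insights" is illustrative, not the project's actual schema.
experiment = {
    "name": "pqc-benchmark-sample",
    "algorithms": ["prime256v1", "kyber768", "p256_kyber512"],
    "results": {},
}

# New optional field added by this task.
experiment["insights"] = "Reviewed GenAI summary of the run goes here."

def should_show_insights(doc):
    """UI condition: render the insights section only when the
    insights property exists and is non-empty."""
    return bool(doc.get("insights"))

print(should_show_insights(experiment))         # True
print(should_show_insights({"name": "other"}))  # False
```

Treating a missing key, None, and an empty string the same way keeps the "conditional section" rule simple on the front end.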
Note: this prompt was generated on January 1st, 2024. It will need to be enhanced when we have more metrics (throughput, error rate, handshake time, and more) and when we add more parameters, such as message size.
You are now a Quantum Cryptography Benchmarking Azure Lab and a Data Analyst.
I am currently designing and implementing a cloud-based architecture for a new post-quantum cryptography (PQC) tool. The tool is supposed to produce benchmarking reports for PQC algorithms simulated in real-world scenarios. I would like to set up a lab environment in Azure that runs tests comparing PQC algorithms to classic and hybrid algorithms, to find the impact of the new PQC/hybrid algorithms on existing ecosystems. We need to evaluate classic algorithms (prime256v1 and secp384r1), hybrid algorithms (p256_kyber512, p384_kyber768, x25519_kyber768), and quantum-safe algorithms (bikel1, bikel3, kyber512, kyber768, kyber1024, frodo640aes, frodo640shake, frodo976aes, frodo976shake, frodo1344aes, frodo1344shake, hqc128, hqc192, hqc256).
I am seeking to explore the following research questions:
What is the CPU / Memory usage between different algorithms (PQ / Hybrid / Classic)?
Can we see an exponential rise or anomalies in PQ/Hybrid vs classic algorithms when increasing the number of iterations?
Can we see a substantial effect of PQ/Hybrid algorithms on the metrics that we selected for examination?
How much more CPU and Memory in percentage will I need on an app that implements hybrid and quantum-safe algorithms, compared to classic algorithms?
Given a JSON dataset of benchmarking results, could you help us analyze it in line with these research questions and provide any additional insights you might notice?
When styling your response, please format technical terms and algorithm names as code.
Please analyze the data yourself and provide answers. Do not tell me to explore my data in Python or similar. Please ignore 0 values in the JSON. The units for CPU usage are in % and the units for memory are in MB.
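As a sanity check alongside the GenAI analysis, the overhead question ("how much more CPU and memory, in percentage") can also be computed directly, honoring the prompt's rules: ignore 0 values, CPU in %, memory in MB. The sample numbers and helper names below are hypothetical, not measured results:

```python
# Hypothetical per-algorithm samples: CPU in %, memory in MB,
# including the 0 values that the prompt says to ignore.
samples = {
    "prime256v1": {"cpu": [12.0, 13.0, 0.0], "mem": [110.0, 112.0]},
    "kyber768":   {"cpu": [15.0, 16.0],      "mem": [130.0, 0.0, 134.0]},
}

def avg_nonzero(values):
    """Average a series after dropping 0 values, per the prompt's rule."""
    vals = [v for v in values if v != 0]
    return sum(vals) / len(vals)

def overhead_pct(pq_alg, classic_alg, metric):
    """Percentage increase of a PQ/hybrid algorithm over a classic baseline."""
    pq = avg_nonzero(samples[pq_alg][metric])
    base = avg_nonzero(samples[classic_alg][metric])
    return (pq - base) / base * 100.0

print(round(overhead_pct("kyber768", "prime256v1", "cpu"), 1))  # 24.0
print(round(overhead_pct("kyber768", "prime256v1", "mem"), 1))  # 18.9
```

Numbers computed this way are a useful cross-check when reviewing the genAI-generated insights before they go into an official report.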
litalmason changed the title from "GenAI - Data analysis" to "Data analysis (powered by genAI)" on Feb 15, 2024.
Description
The idea is to analyze our reports and to provide coherent insights on the data that we collected.
This is one of the main goals of this project.
Figma
Insights section "View more"
Insights section "View less"