-
-
## Introduction
CD3 stands for Cloud Deployment Design Deliverable.
The CD3 Automation toolkit has been developed to help automate the management of OCI resources.
@@ -80,7 +81,8 @@ The CD3 Automation toolkit has been developed to help in automating the OCI reso
It reads input data in the form of a CD3 Excel sheet and generates Terraform files that can be used to provision the resources in OCI, instead of handling the task manually through the OCI console. The toolkit also reverse-engineers existing OCI components back into the Excel sheet and Terraform configuration. It can be used throughout the lifecycle of a tenancy to continuously create new resources or modify existing ones. The generated Terraform code can be consumed by OCI Resource Manager or integrated into an organization's existing DevOps CI/CD ecosystem.
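The Excel-to-tfvars flow described above can be sketched as follows. This is a minimal illustration, not the toolkit's actual code: the real toolkit reads the .xlsx with pandas and renders Jinja2 templates from each service's templates/ directory, whereas here plain dicts stand in for sheet rows and `string.Template` stands in for Jinja2; all names are hypothetical.

```python
from string import Template

# Stand-ins for rows read from a CD3 Excel tab (illustrative names only).
rows = [
    {"display_name": "app-vm-1", "region": "ashburn", "shape": "VM.Standard.E4.Flex"},
    {"display_name": "app-vm-2", "region": "phoenix", "shape": "VM.Standard.E4.Flex"},
]

TFVARS_BLOCK = Template('''instances = {
$entries
}
''')
ENTRY = Template('''  $key = {
    display_name = "$display_name"
    shape        = "$shape"
  }''')

def render_tfvars(rows):
    """Render rows into a <prefix>_<sheet>.auto.tfvars-style string."""
    entries = "\n".join(
        ENTRY.substitute(key=r["display_name"].replace("-", "_"), **r) for r in rows
    )
    return TFVARS_BLOCK.substitute(entries=entries)

print(render_tfvars(rows))
```

The resulting `*.auto.tfvars` text is what Terraform (or OCI Resource Manager) then consumes to provision the resources.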
-
+
+
@@ -98,11 +100,10 @@ It reads input data in the form of CD3 Excel sheet and generates Terraform files
| [Management Services](/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md#management-services) | Events, Notifications, Alarms, Service Connector Hub (SCH) |
| [Developer Services](/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md#developer-services) | Resource Manager, Oracle Kubernetes Engine (OKE) |
| [Logging Services](/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md#logging-services) | VCN Flow Logs, LBaaS access and error logs, OSS Bucket write logs |
-| [SDDCs ](/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md#sddcs-tab) | Oracle Cloud VMWare Solutions |
+| [SDDCs](/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md#sddcs-tab) | Oracle Cloud VMware Solution (single-cluster only for now; multi-cluster support is planned for an upcoming release) |
| [CIS Landing Zone Compliance](/cd3_automation_toolkit/documentation/user_guide/learn_more/CISFeatures.md#additional-cis-compliance-features) | Download and Execute CIS Compliance Check Script, Cloud Guard, Key Vault, Budget |
| [Policy Enforcement](/cd3_automation_toolkit/documentation/user_guide/learn_more/OPAForCompliance.md) | OPA - Open Policy Agent |
-
[Click here](/cd3_automation_toolkit/documentation/user_guide/prerequisites.md) to get started and manage your OCI Infra!
## Contributing
diff --git a/cd3_automation_toolkit/Compute/create_terraform_dedicatedhosts.py b/cd3_automation_toolkit/Compute/create_terraform_dedicatedhosts.py
index ac2d28712..c81368660 100644
--- a/cd3_automation_toolkit/Compute/create_terraform_dedicatedhosts.py
+++ b/cd3_automation_toolkit/Compute/create_terraform_dedicatedhosts.py
@@ -25,19 +25,16 @@
# If input is CD3 excel file
# Execution of the code begins here
-def create_terraform_dedicatedhosts(inputfile, outdir, service_dir,prefix, config):
+def create_terraform_dedicatedhosts(inputfile, outdir, service_dir,prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True, trim_blocks=True, lstrip_blocks=True)
template = env.get_template('dedicatedvmhosts-template')
filename = inputfile
- configFileName = config
sheetName = "DedicatedVMHosts"
auto_tfvars_filename = prefix + '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
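The hunk above (and the matching hunks in the other `create_terraform_*` files) replaces the per-function `config` file path with a `ct` object built once by the caller, deleting the repeated `ct = commonTools(); ct.get_subscribedregions(configFileName)` boilerplate. A minimal sketch of that refactor, with a hypothetical `Context` dataclass standing in for `commonTools`:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    # Hypothetical stand-in for commonTools: holds data that used to be
    # re-fetched from the config file inside every create_terraform_* function.
    all_regions: list = field(default_factory=lambda: ["ashburn", "phoenix"])
    region_dict: dict = field(default_factory=lambda: {"ashburn": "us-ashburn-1"})

def create_terraform_dedicatedhosts(inputfile, outdir, service_dir, prefix, ct):
    # After the refactor the function just consumes the shared context
    # instead of reading the config file and subscribing regions itself.
    return {reg: f"{outdir}/{reg}/{service_dir}/{prefix}_dedicatedvmhosts.auto.tfvars"
            for reg in ct.all_regions}

ct = Context()  # built once by the caller, passed to every generator
paths = create_terraform_dedicatedhosts("CD3.xlsx", "out", "compute", "demo", ct)
```

The design choice trades a slightly wider function signature for one config read per run instead of one per service, and makes the functions testable without a real OCI config file.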
diff --git a/cd3_automation_toolkit/Compute/create_terraform_instances.py b/cd3_automation_toolkit/Compute/create_terraform_instances.py
index 7905ac0ac..e5bbda2d7 100755
--- a/cd3_automation_toolkit/Compute/create_terraform_instances.py
+++ b/cd3_automation_toolkit/Compute/create_terraform_instances.py
@@ -22,18 +22,15 @@
# If input is CD3 excel file
# Execution of the code begins here
-def create_terraform_instances(inputfile, outdir, service_dir, prefix, config):
+def create_terraform_instances(inputfile, outdir, service_dir, prefix, ct):
boot_policy_tfStr = {}
tfStr = {}
ADS = ["AD1", "AD2", "AD3"]
filename = inputfile
- configFileName = config
sheetName = "Instances"
auto_tfvars_filename = prefix + '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
@@ -153,7 +150,7 @@ def create_terraform_instances(inputfile, outdir, service_dir, prefix, config):
except Exception as e:
print("Invalid Subnet Name specified for row " + str(
i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_id': commonTools.check_tf_variable(network_compartment_id),
'vcn_name': vcn_name,
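The recurring `exit()` → `exit(1)` change above matters because a bare `exit()` returns status 0, which shells and CI wrappers read as success even though the script bailed out on a validation error. A small demonstration:

```python
import subprocess
import sys

# sys.exit() with no argument yields exit status 0 ("success") even when the
# script aborted on an error; sys.exit(1) signals failure to the caller.
ok = subprocess.run([sys.executable, "-c", "import sys; sys.exit()"])
bad = subprocess.run([sys.executable, "-c", "import sys; sys.exit(1)"])
print(ok.returncode, bad.returncode)  # 0 1
```

With the nonzero status, a pipeline step such as `python create_terraform_instances.py && terraform apply` correctly stops at the first failure.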
diff --git a/cd3_automation_toolkit/Compute/export_dedicatedvmhosts_nonGreenField.py b/cd3_automation_toolkit/Compute/export_dedicatedvmhosts_nonGreenField.py
index 2f5652219..41e882645 100644
--- a/cd3_automation_toolkit/Compute/export_dedicatedvmhosts_nonGreenField.py
+++ b/cd3_automation_toolkit/Compute/export_dedicatedvmhosts_nonGreenField.py
@@ -44,14 +44,12 @@ def print_dedicatedvmhosts(region, dedicatedvmhost, values_for_column, ntk_compa
values_for_column = commonTools.export_extra_columns(oci_objs, col_header, sheet_dict, values_for_column)
# Execution of the code begins here
-def export_dedicatedvmhosts(inputfile, _outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
+def export_dedicatedvmhosts(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global values_for_column
@@ -60,16 +58,7 @@ def export_dedicatedvmhosts(inputfile, _outdir, service_dir, _config, ct, export
print("\nAcceptable cd3 format: .xlsx")
exit()
-
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName="DedicatedVMHosts"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
# Read CD3
df, values_for_column= commonTools.read_cd3(cd3file,sheetName)
@@ -100,7 +89,7 @@ def export_dedicatedvmhosts(inputfile, _outdir, service_dir, _config, ct, export
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- compute_client = oci.core.ComputeClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ compute_client = oci.core.ComputeClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
for ntk_compartment_name in export_compartments:
dedicatedvmhosts = oci.pagination.list_call_get_all_results(compute_client.list_dedicated_vm_hosts,compartment_id=ct.ntk_compartment_ids[ntk_compartment_name], lifecycle_state="ACTIVE")
@@ -108,6 +97,12 @@ def export_dedicatedvmhosts(inputfile, _outdir, service_dir, _config, ct, export
dedicatedvmhost=compute_client.get_dedicated_vm_host(dedicatedvmhost.id).data
print_dedicatedvmhosts(region, dedicatedvmhost,values_for_column, ntk_compartment_name)
+ # write data into file
+ for reg in export_regions:
+ script_file = f'{outdir}/{reg}/{service_dir}/'+file_name
+ with open(script_file, 'a') as importCommands[reg]:
+ importCommands[reg].write('\n\nterraform plan\n')
+
commonTools.write_to_cd3(values_for_column, cd3file, "DedicatedVMHosts")
print("Dedicated VM Hosts exported to CD3\n")
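The export hunk above appends a trailing `terraform plan` to each region's import script after all resources have been written. A stdlib-only sketch of that append-per-region loop (directory layout and file name mirror the diff; the temporary directory is just for illustration):

```python
import tempfile
from pathlib import Path

export_regions = ["ashburn", "phoenix"]
file_name = "tf_import_commands_dedicatedvmhosts_nonGF.sh"

outdir = Path(tempfile.mkdtemp())
for reg in export_regions:
    script_dir = outdir / reg / "compute"
    script_dir.mkdir(parents=True, exist_ok=True)
    # Append mode keeps any import commands already written for this region.
    with open(script_dir / file_name, "a") as f:
        f.write("\n\nterraform plan\n")

contents = {reg: (outdir / reg / "compute" / file_name).read_text()
            for reg in export_regions}
```

Ending each generated script with `terraform plan` lets the user immediately verify that the imported state matches the exported configuration.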
diff --git a/cd3_automation_toolkit/Compute/export_instances_nonGreenField.py b/cd3_automation_toolkit/Compute/export_instances_nonGreenField.py
index 9c5849bb3..7f506b502 100644
--- a/cd3_automation_toolkit/Compute/export_instances_nonGreenField.py
+++ b/cd3_automation_toolkit/Compute/export_instances_nonGreenField.py
@@ -71,8 +71,8 @@ def adding_columns_values(region, ad, fd, vs, publicip, privateip, os_dname, sha
values_for_column_instances)
-def find_vnic(ins_id, config, compartment_id):
- compute = oci.core.ComputeClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+def find_vnic(ins_id, compartment_id):
+ compute = oci.core.ComputeClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
#for comp in all_compartments:
net = oci.pagination.list_call_get_all_results(compute.list_vnic_attachments, compartment_id=compartment_id,
instance_id=ins_id)
@@ -80,11 +80,11 @@ def find_vnic(ins_id, config, compartment_id):
return net
-def __get_instances_info(compartment_name, compartment_id, reg_name, config, display_names, ad_names, ct):
+def __get_instances_info(compartment_name, compartment_id, reg_name, display_names, ad_names, ct):
config.__setitem__("region", ct.region_dict[reg_name])
- compute = oci.core.ComputeClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- network = oci.core.VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- bc = oci.core.BlockstorageClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ compute = oci.core.ComputeClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ network = oci.core.VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ bc = oci.core.BlockstorageClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
instance_info = oci.pagination.list_call_get_all_results(compute.list_instances, compartment_id=compartment_id)
# print(instance_info.data)
@@ -169,7 +169,7 @@ def __get_instances_info(compartment_name, compartment_id, reg_name, config, dis
cpcn = comp_name
# VNIC Details
- ins_vnic = find_vnic(ins_id, config, compartment_id)
+ ins_vnic = find_vnic(ins_id, compartment_id)
vnic_info=None
for lnic in ins_vnic.data:
# print(lnic)
@@ -264,17 +264,16 @@ def __get_instances_info(compartment_name, compartment_id, reg_name, config, dis
# Execution of the code begins here
-def export_instances(inputfile, outdir, service_dir,config,ct, export_compartments=[], export_regions=[],display_names=[],ad_names=[]):
+def export_instances(inputfile, outdir, service_dir,config1, signer1, ct, export_compartments=[], export_regions=[],display_names=[],ad_names=[]):
cd3file = inputfile
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
exit()
- configFileName = config
- config = oci.config.from_file(file_location=configFileName)
-
- global instance_keys, user_data_in, os_keys, importCommands, idc, rows, AD, values_for_column_instances, df, sheet_dict_instances # declaring global variables
+ global instance_keys, user_data_in, os_keys, importCommands, idc, rows, AD, values_for_column_instances, df, sheet_dict_instances, config, signer # declaring global variables
+ config=config1
+ signer=signer1
instance_keys = {} # dict name
os_keys = {} # os_ocids
@@ -284,11 +283,6 @@ def export_instances(inputfile, outdir, service_dir,config,ct, export_compartmen
importCommands = {}
rows = []
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
-
AD = lambda ad: "AD1" if ("AD-1" in ad or "ad-1" in ad) else ("AD2" if ("AD-2" in ad or "ad-2" in ad) else ("AD3" if ("AD-3" in ad or "ad-3" in ad) else " NULL")) # Get shortend AD
df, values_for_column_instances = commonTools.read_cd3(cd3file, sheetName)
@@ -313,7 +307,7 @@ def export_instances(inputfile, outdir, service_dir,config,ct, export_compartmen
importCommands[reg].write("\n\n######### Writing import for Instances #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
for ntk_compartment_name in export_compartments:
- __get_instances_info(ntk_compartment_name, ct.ntk_compartment_ids[ntk_compartment_name], reg, config, display_names, ad_names,ct)
+ __get_instances_info(ntk_compartment_name, ct.ntk_compartment_ids[ntk_compartment_name], reg, display_names, ad_names,ct)
# writing image ocids and SSH keys into variables file
var_data = {}
@@ -367,7 +361,7 @@ def export_instances(inputfile, outdir, service_dir,config,ct, export_compartmen
# write data into file
for reg in export_regions:
- script_file = f'{outdir}/{reg}/{service_dir}/tf_import_commands_instances_nonGF.sh'
+ script_file = f'{outdir}/{reg}/{service_dir}/' + file_name
with open(script_file, 'a') as importCommands[reg]:
importCommands[reg].write('\n\nterraform plan\n')
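In the `export_instances` hunks above, the caller-supplied `config1`/`signer1` are published as module-level globals so that helpers like `find_vnic` and `__get_instances_info` no longer need them threaded through every call. A minimal sketch of the pattern (function bodies and the OCID string are hypothetical; the real helpers build OCI SDK clients from these globals):

```python
# Module-level slots, filled in by the entry point.
config = None
signer = None

def export_instances(config1, signer1):
    # Publish the caller-supplied auth objects as module globals so the
    # helper functions below can use them without extra parameters.
    global config, signer
    config = config1
    signer = signer1
    return find_vnic("ocid1.instance.oc1..example")

def find_vnic(ins_id):
    # Reads the module-level config/signer instead of taking them as arguments.
    return (ins_id, config["region"], signer)

result = export_instances({"region": "us-ashburn-1"}, "fake-signer")
```

Globals keep the diff small here; a larger refactor might instead pass a client factory or context object, as the `create_terraform_*` functions now do with `ct`.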
diff --git a/cd3_automation_toolkit/Database/create_terraform_adb.py b/cd3_automation_toolkit/Database/create_terraform_adb.py
index 3c4ffab1e..71181a4a6 100644
--- a/cd3_automation_toolkit/Database/create_terraform_adb.py
+++ b/cd3_automation_toolkit/Database/create_terraform_adb.py
@@ -19,16 +19,12 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_adb(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_adb(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "ADB"
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
outfile = {}
oname = {}
tfStr = {}
@@ -81,7 +77,7 @@ def create_terraform_adb(inputfile, outdir, service_dir, prefix, config=DEFAULT_
str(df.loc[i, 'DB Name']).lower() == 'nan':
print("\nRegion, Compartment Name, CPU Core Count, Data Storage Size in TB and DB Name fields are mandatory. Please enter a value and try again !!")
print("\n** Exiting **")
- exit()
+ exit(1)
for columnname in dfcolumns:
# Column value
@@ -148,7 +144,7 @@ def create_terraform_adb(inputfile, outdir, service_dir, prefix, config=DEFAULT_
except Exception as e:
print("Invalid Subnet Name specified for row " + str(
i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
else:
subnet_id = ""
vcn_name = ""
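The mandatory-field checks in the hunks above rely on pandas reading empty Excel cells as `float('nan')`, whose `str()` is `'nan'`. A stdlib-only sketch of the same validation (no pandas; a plain dict stands in for a DataFrame row, and the column names are illustrative):

```python
def is_blank(cell):
    # pandas reads empty Excel cells as float('nan'); str(nan) == 'nan',
    # which is exactly what the toolkit's checks compare against.
    return str(cell).lower() == "nan"

def validate_row(row, mandatory=("Region", "Compartment Name", "DB Name")):
    """Return the list of mandatory columns that are empty or absent."""
    return [col for col in mandatory if is_blank(row.get(col, float("nan")))]

row = {"Region": "ashburn", "Compartment Name": float("nan")}
missing = validate_row(row)  # "Compartment Name" is empty, "DB Name" is absent
```

One caveat of this idiom: a cell that literally contains the text "nan" (or "NaN") would be flagged as empty too.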
diff --git a/cd3_automation_toolkit/Database/create_terraform_dbsystems_vm_bm.py b/cd3_automation_toolkit/Database/create_terraform_dbsystems_vm_bm.py
index d3fb07dc6..7b4a0f481 100644
--- a/cd3_automation_toolkit/Database/create_terraform_dbsystems_vm_bm.py
+++ b/cd3_automation_toolkit/Database/create_terraform_dbsystems_vm_bm.py
@@ -20,14 +20,11 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_dbsystems_vm_bm(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_dbsystems_vm_bm(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "DBSystems-VM-BM"
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -112,13 +109,13 @@ def create_terraform_dbsystems_vm_bm(inputfile, outdir, service_dir, prefix, con
str(df.loc[i, 'Hostname Prefix']).lower() == 'nan' or \
str(df.loc[i, 'Shape']).lower() == 'nan' :
print("\nRegion, Compartment Name, Availability Domain(AD1|AD2|AD3), SSH Key Var Name, Subnet Name, Hostname, Shape are mandatory fields. Please enter a value and try again.......Exiting!!")
- exit()
+ exit(1)
if str(df.loc[i, 'DB Name']).lower() == 'nan' or \
str(df.loc[i, 'DB Version']).lower() == 'nan' or \
str(df.loc[i, 'Database Edition']).lower() == 'nan' or \
str(df.loc[i, 'DB Admin Password']).lower() == 'nan':
print("\nDB Name, DB Version, Database Edition, DB Admin Password are mandatory fields. Please enter a value and try again.......Exiting!!")
- exit()
+ exit(1)
for columnname in dfcolumns:
# Column value
@@ -161,7 +158,7 @@ def create_terraform_dbsystems_vm_bm(inputfile, outdir, service_dir, prefix, con
except Exception as e:
print("Invalid Subnet Name specified for row " + str(
i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_id': commonTools.check_tf_variable(network_compartment_id),
'vcn_name': vcn_name,
diff --git a/cd3_automation_toolkit/Database/create_terraform_exa_infra.py b/cd3_automation_toolkit/Database/create_terraform_exa_infra.py
index b2a3e8cea..a2deb9963 100644
--- a/cd3_automation_toolkit/Database/create_terraform_exa_infra.py
+++ b/cd3_automation_toolkit/Database/create_terraform_exa_infra.py
@@ -19,14 +19,11 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_exa_infra(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_exa_infra(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "EXA-Infra"
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -85,7 +82,7 @@ def create_terraform_exa_infra(inputfile, outdir, service_dir, prefix, config=DE
str(df.loc[i, 'Availability Domain(AD1|AD2|AD3)']).lower() == 'nan' or \
str(df.loc[i, 'Shape']).lower() == 'nan':
print("\nRegion, Compartment Name, Availability Domain(AD1|AD2|AD3), Shape are mandatory fields. Please enter a value and try again.......Exiting!!")
- exit()
+ exit(1)
#tempdict = {'oracle_db_software_edition' : 'ENTERPRISE_EDITION_EXTREME_PERFORMANCE'}
diff --git a/cd3_automation_toolkit/Database/create_terraform_exa_vmclusters.py b/cd3_automation_toolkit/Database/create_terraform_exa_vmclusters.py
index 6b6fff019..d234e5a61 100644
--- a/cd3_automation_toolkit/Database/create_terraform_exa_vmclusters.py
+++ b/cd3_automation_toolkit/Database/create_terraform_exa_vmclusters.py
@@ -19,14 +19,11 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_exa_vmclusters(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_exa_vmclusters(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "EXA-VMClusters"
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -95,7 +92,7 @@ def create_terraform_exa_vmclusters(inputfile, outdir, service_dir, prefix, conf
str(df.loc[i, 'Hostname Prefix']).lower() == 'nan' or \
str(df.loc[i, 'Oracle Grid Infrastructure Version']).lower() == 'nan':
print("\nRegion, Compartment Name, Exadata Infra Display Name, VM Cluster Display Name, Subnet Names, CPU Core Count, Hostname Prefix, Oracle Grid Infrastructure Version, SSH Key Var Name are mandatory fields. Please enter a value and try again.......Exiting!!")
- exit()
+ exit(1)
# tempdict = {'oracle_db_software_edition' : 'ENTERPRISE_EDITION_EXTREME_PERFORMANCE'}
@@ -145,7 +142,7 @@ def create_terraform_exa_vmclusters(inputfile, outdir, service_dir, prefix, conf
except Exception as e:
print("Invalid Subnet Name specified for row " + str(
i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_id': commonTools.check_tf_variable(network_compartment_id),
'vcn_name': vcn_name,
@@ -166,7 +163,7 @@ def create_terraform_exa_vmclusters(inputfile, outdir, service_dir, prefix, conf
except Exception as e:
print("Invalid Subnet Name specified for row " + str(
i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'backup_subnet_name': subnet_id}
diff --git a/cd3_automation_toolkit/Database/export_adb_nonGreenField.py b/cd3_automation_toolkit/Database/export_adb_nonGreenField.py
index dcc2d2434..d556b88b9 100644
--- a/cd3_automation_toolkit/Database/export_adb_nonGreenField.py
+++ b/cd3_automation_toolkit/Database/export_adb_nonGreenField.py
@@ -88,14 +88,12 @@ def print_adbs(region, vnc_client, adb, values_for_column, ntk_compartment_name)
values_for_column = commonTools.export_extra_columns(oci_objs, col_header, sheet_dict, values_for_column)
# Execution of the code begins here
-def export_adbs(inputfile, _outdir, service_dir, ct, _config=DEFAULT_LOCATION, export_compartments=[],export_regions=[]):
+def export_adbs(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[],export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global values_for_column
cd3file = inputfile # input file
@@ -103,15 +101,7 @@ def export_adbs(inputfile, _outdir, service_dir, ct, _config=DEFAULT_LOCATION, e
print("\nAcceptable cd3 format: .xlsx")
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = "ADB"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
# Read CD3
df, values_for_column = commonTools.read_cd3(cd3file, sheetName)
@@ -144,8 +134,8 @@ def export_adbs(inputfile, _outdir, service_dir, ct, _config=DEFAULT_LOCATION, e
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- adb_client = oci.database.DatabaseClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vnc_client = oci.core.VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ adb_client = oci.database.DatabaseClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ vnc_client = oci.core.VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
#adbs = {}
for ntk_compartment_name in export_compartments:
@@ -155,6 +145,11 @@ def export_adbs(inputfile, _outdir, service_dir, ct, _config=DEFAULT_LOCATION, e
adb = adb_client.get_autonomous_database(adb.id).data
print_adbs(region, vnc_client, adb, values_for_column, ntk_compartment_name)
+ for reg in export_regions:
+ script_file = f'{outdir}/{reg}/{service_dir}/' + file_name
+ with open(script_file, 'a') as importCommands[reg]:
+ importCommands[reg].write('\n\nterraform plan\n')
+
commonTools.write_to_cd3(values_for_column, cd3file, "ADB")
print("ADBs exported to CD3\n")
diff --git a/cd3_automation_toolkit/Database/export_dbsystems_vm_bm_nonGreenField.py b/cd3_automation_toolkit/Database/export_dbsystems_vm_bm_nonGreenField.py
index 937a44aa6..24452c2ce 100644
--- a/cd3_automation_toolkit/Database/export_dbsystems_vm_bm_nonGreenField.py
+++ b/cd3_automation_toolkit/Database/export_dbsystems_vm_bm_nonGreenField.py
@@ -96,14 +96,12 @@ def print_dbsystem_vm_bm(region, db_system_vm_bm, count,db_home, database ,vnc_c
# Execution of the code begins here
-def export_dbsystems_vm_bm(inputfile, _outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
+def export_dbsystems_vm_bm(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global values_for_column
@@ -113,16 +111,7 @@ def export_dbsystems_vm_bm(inputfile, _outdir, service_dir, _config, ct, export_
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = 'DBSystems-VM-BM'
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
var_data = {}
# Read CD3
@@ -160,8 +149,8 @@ def export_dbsystems_vm_bm(inputfile, _outdir, service_dir, _config, ct, export_
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- db_client = oci.database.DatabaseClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vnc_client = oci.core.VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ db_client = oci.database.DatabaseClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ vnc_client = oci.core.VirtualNetworkClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
db = {}
for ntk_compartment_name in export_compartments:
diff --git a/cd3_automation_toolkit/Database/export_exa_infra_nonGreenField.py b/cd3_automation_toolkit/Database/export_exa_infra_nonGreenField.py
index e57958a87..22b72a2f7 100644
--- a/cd3_automation_toolkit/Database/export_exa_infra_nonGreenField.py
+++ b/cd3_automation_toolkit/Database/export_exa_infra_nonGreenField.py
@@ -47,14 +47,12 @@ def print_exa_infra(region, exa_infra, values_for_column, ntk_compartment_name):
# Execution of the code begins here
-def export_exa_infra(inputfile, _outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
+def export_exa_infra(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global values_for_column
@@ -64,17 +62,7 @@ def export_exa_infra(inputfile, _outdir, service_dir, _config, ct, export_compar
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = "EXA-Infra"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
- # Read CD3
df, values_for_column= commonTools.read_cd3(cd3file,sheetName)
# Get dict for columns from Excel_Columns
@@ -104,13 +92,17 @@ def export_exa_infra(inputfile, _outdir, service_dir, _config, ct, export_compar
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- db_client = oci.database.DatabaseClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ db_client = oci.database.DatabaseClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
for ntk_compartment_name in export_compartments:
exa_infras = oci.pagination.list_call_get_all_results(db_client.list_cloud_exadata_infrastructures,compartment_id=ct.ntk_compartment_ids[ntk_compartment_name], lifecycle_state="AVAILABLE")
for exa_infra in exa_infras.data:
print_exa_infra(region, exa_infra,values_for_column, ntk_compartment_name)
+ for reg in export_regions:
+ script_file = f'{outdir}/{reg}/{service_dir}/' + file_name
+ with open(script_file, 'a') as importCommands[reg]:
+ importCommands[reg].write('\n\nterraform plan\n')
commonTools.write_to_cd3(values_for_column, cd3file, sheetName)
print("Exadata Infra exported to CD3\n")
diff --git a/cd3_automation_toolkit/Database/export_exa_vmclusters_nonGreenField.py b/cd3_automation_toolkit/Database/export_exa_vmclusters_nonGreenField.py
index 0befda84c..bfab06282 100644
--- a/cd3_automation_toolkit/Database/export_exa_vmclusters_nonGreenField.py
+++ b/cd3_automation_toolkit/Database/export_exa_vmclusters_nonGreenField.py
@@ -85,14 +85,12 @@ def print_exa_vmcluster(region, vnc_client,exa_infra, exa_vmcluster, key_name,va
# Execution of the code begins here
-def export_exa_vmclusters(inputfile, _outdir, service_dir, _config, ct, export_compartments=[],export_regions=[]):
+def export_exa_vmclusters(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[],export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global values_for_column
@@ -102,16 +100,8 @@ def export_exa_vmclusters(inputfile, _outdir, service_dir, _config, ct, export_c
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = 'EXA-VMClusters'
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
+
var_data ={}
# Read CD3
@@ -150,8 +140,8 @@ def export_exa_vmclusters(inputfile, _outdir, service_dir, _config, ct, export_c
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- db_client = oci.database.DatabaseClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vnc_client = oci.core.VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ db_client = oci.database.DatabaseClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ vnc_client = oci.core.VirtualNetworkClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
db={}
for ntk_compartment_name in export_compartments:
diff --git a/cd3_automation_toolkit/DeveloperServices/OKE/create_terraform_oke.py b/cd3_automation_toolkit/DeveloperServices/OKE/create_terraform_oke.py
index 4306e7f03..903870ac5 100644
--- a/cd3_automation_toolkit/DeveloperServices/OKE/create_terraform_oke.py
+++ b/cd3_automation_toolkit/DeveloperServices/OKE/create_terraform_oke.py
@@ -20,7 +20,7 @@
# Required Inputs-CD3 excel file, Config file AND outdir
######
# Execution of the code begins here
-def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_oke(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
@@ -32,13 +32,9 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
ADS = ["AD1", "AD2", "AD3"]
filename = inputfile
- configFileName = config
cluster_str = {}
node_str = {}
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
# Read cd3 using pandas dataframe
df, col_headers = commonTools.read_cd3(filename, sheetName)
df = df.dropna(how='all')
@@ -81,7 +77,7 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
if region not in ct.all_regions:
print("\nInvalid Region; It should be one of the values mentioned in VCN Info tab...Exiting!!")
- exit()
+ exit(1)
display_name = str(df.loc[i, 'CompartmentName&Node Pool Name'])
shapeField = str(df.loc[i, 'Shape'])
@@ -119,7 +115,7 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
"\nRegion, Compartment Name, Cluster Name, Network Type, Cluster Kubernetes Version, Pod Security Policies, Load Balancer Subnets, API Endpoint Subnet fields are mandatory. Please enter a value and try again !!\n\nPlease fix it for row : {}".format(
i + 3))
print("\n** Exiting **")
- exit()
+ exit(1)
if str(df.loc[i, 'CompartmentName&Node Pool Name']).lower() != 'nan':
if str(df.loc[i, 'Nodepool Kubernetes Version']).lower() == 'nan' or \
str(df.loc[i, 'Shape']).lower() == 'nan' or \
@@ -130,13 +126,13 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
print(
"\nCompartmentName&Node Pool Name, Nodepool Kubernetes Version, Shape, Source Details, Number of Nodes, Worker Node Subnet and Availability Domain(AD1|AD2|AD3) fields are mandatory. \n\nPlease fix it for row : {} and try again.".format(i+3))
print("\n** Exiting **")
- exit()
+ exit(1)
'''
if str(df.loc[i, 'Network Type']).lower() == 'oci_vcn_ip_native':
if str(df.loc[i, 'Pod Communication Subnet']).lower() == 'nan':
print("\nPod Communication Subnet required for cluster with networking type:OCI_VCN_IP_NATIVE")
print("\n** Exiting **")
- exit()
+ exit(1)
'''
# Fetch data; loop through columns
for columnname in dfcolumns:
@@ -220,7 +216,7 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
oke_lb_subnets_list.append(subnets.vcn_subnet_map[key][2])
except Exception as e:
print("Invalid Subnet Name specified for row {} and column \"{}\". It Doesnt exist in Subnets sheet. Exiting!!!".format(i+3,columnname))
- exit()
+ exit(1)
tempdict = {'network_compartment_tf_name': network_compartment_id, 'vcn_name': vcn_name,'oke_lb_subnets': json.dumps(oke_lb_subnets_list)}
elif len(oke_lb_subnets) > 1:
for subnet in oke_lb_subnets:
@@ -235,7 +231,7 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
oke_lb_subnets_list.append(subnets.vcn_subnet_map[key][2])
except Exception as e:
print("Invalid Subnet Name specified for row {} and column \"{}\". It Doesnt exist in Subnets sheet. Exiting!!!".format(i+3,columnname))
- exit()
+ exit(1)
tempdict = {'network_compartment_tf_name': network_compartment_id, 'vcn_tf_name': vcn_name,'oke_lb_subnets': json.dumps(oke_lb_subnets_list) }
if columnname == 'API Endpoint Subnet':
@@ -251,11 +247,11 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
api_endpoint_subnet = subnets.vcn_subnet_map[key][2]
except Exception as e:
print("Invalid Subnet Name specified for row {} and column \"{}\". It doesn't exist in Subnets sheet. Exiting!!!".format(i+3,columnname))
- exit()
+ exit(1)
tempdict = {'api_endpoint_subnet': api_endpoint_subnet}
elif len(subnet_tf_name) > 1:
print("Invalid Subnet Values for row {} and column \"{}\". Only one subnet allowed".format(i+3,columnname))
- exit()
+ exit(1)
if columnname == 'Worker Node Subnet':
subnet_tf_name = str(columnvalue).strip().split()
@@ -271,13 +267,13 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
worker_node_subnet = subnets.vcn_subnet_map[key][2]
except Exception as e:
print("Invalid Subnet Name specified for row {} and column \"{}\". It doesn't exist in Subnets sheet. Exiting!!!".format(i+3,columnname))
- exit()
+ exit(1)
else:
worker_node_subnet = ""
tempdict = {'worker_node_subnet': worker_node_subnet}
elif len(subnet_tf_name) > 1:
print("Invalid Subnet Values for row {} and column \"{}\". Only one subnet allowed".format(i+3,columnname))
- exit()
+ exit(1)
if columnname == 'Pod Communication Subnet':
subnet_tf_name = columnvalue.strip()
@@ -290,7 +286,7 @@ def create_terraform_oke(inputfile, outdir, service_dir, prefix, config=DEFAULT_
pod_communication_subnet = subnets.vcn_subnet_map[key][2]
except Exception as e:
print("Invalid Subnet Name specified for row {} and column \"{}\". It doesn't exist in Subnets sheet. Exiting!!!".format(i+3,columnname))
- exit()
+ exit(1)
else:
pod_communication_subnet = ""
tempdict = {'pod_communication_subnet': pod_communication_subnet}
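The recurring `exit()` → `exit(1)` change throughout these hunks makes validation failures visible to the calling shell or CI pipeline through the process exit status. A minimal stdlib sketch of the difference (illustrative only, not toolkit code):

```python
import subprocess
import sys

# exit() with no argument terminates with status 0, which shells and CI
# pipelines read as success; exit(1) signals failure explicitly, so a
# wrapper script can stop instead of proceeding with bad input.
ok = subprocess.run([sys.executable, "-c", "raise SystemExit()"])
bad = subprocess.run([sys.executable, "-c", "raise SystemExit(1)"])
print(ok.returncode, bad.returncode)
```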
diff --git a/cd3_automation_toolkit/DeveloperServices/OKE/export_oke_nonGreenField.py b/cd3_automation_toolkit/DeveloperServices/OKE/export_oke_nonGreenField.py
index 312dc40f6..7c5b0b3c8 100644
--- a/cd3_automation_toolkit/DeveloperServices/OKE/export_oke_nonGreenField.py
+++ b/cd3_automation_toolkit/DeveloperServices/OKE/export_oke_nonGreenField.py
@@ -259,31 +259,23 @@ def print_oke(values_for_column_oke, reg, compartment_name, compartment_name_nod
values_for_column_oke = commonTools.export_extra_columns(oci_objs, col_header, sheet_dict_oke,values_for_column_oke)
# Execution of the code begins here
-def export_oke(inputfile, outdir,service_dir, ct, _config=DEFAULT_LOCATION, export_compartments=[], export_regions=[]):
+def export_oke(inputfile, outdir,service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global importCommands
global tf_import_cmd
global values_for_column_oke
global sheet_dict_oke
- global config
cd3file = inputfile
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
exit()
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = "OKE"
resource = 'tf_import_' + sheetName.lower()
file_name = 'tf_import_commands_' + sheetName.lower() + '_nonGF.sh'
importCommands={}
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
df, values_for_column_oke = commonTools.read_cd3(cd3file, "OKE")
@@ -311,8 +303,8 @@ def export_oke(inputfile, outdir,service_dir, ct, _config=DEFAULT_LOCATION, expo
for reg in export_regions:
importCommands[reg].write("\n\n######### Writing import for OKE Objects #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
- oke = ContainerEngineClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- network = VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ oke = ContainerEngineClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ network = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
for compartment_name in export_compartments:
clusterList = []
diff --git a/cd3_automation_toolkit/DeveloperServices/OKE/templates/cluster-template b/cd3_automation_toolkit/DeveloperServices/OKE/templates/cluster-template
index 8e5f56cf5..9ce2fbc2a 100644
--- a/cd3_automation_toolkit/DeveloperServices/OKE/templates/cluster-template
+++ b/cd3_automation_toolkit/DeveloperServices/OKE/templates/cluster-template
@@ -46,6 +46,10 @@ clusters = {
services_cidr = "{{ service_cidr_block }}"
{% endif %}
service_lb_subnet_ids = {{ oke_lb_subnets }}
+ {% if cluster_kms_key_id %}
+ cluster_kms_key_id = "{{ cluster_kms_key_id }}"
+ {% endif %}
+
{# ##Do not modify below this line## #}
{# #}
{# ##Do not modify below this line## #}
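The new `{% if cluster_kms_key_id %}` guard emits the attribute only when a KMS key OCID is supplied, so clusters without a key produce no line at all. A stdlib sketch of the same conditional rendering (the function name is hypothetical, not part of the toolkit):

```python
def render_cluster_kms(cluster_kms_key_id=None):
    # Emit the HCL attribute only when a key OCID is present, mirroring
    # the Jinja guard above; an unset value produces an empty string.
    if cluster_kms_key_id:
        return f'cluster_kms_key_id = "{cluster_kms_key_id}"'
    return ""

print(render_cluster_kms("ocid1.key.oc1..example"))
print(repr(render_cluster_kms()))
```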
diff --git a/cd3_automation_toolkit/DeveloperServices/OKE/templates/nodepool-template b/cd3_automation_toolkit/DeveloperServices/OKE/templates/nodepool-template
index d033b9fb8..d76cf6ef6 100644
--- a/cd3_automation_toolkit/DeveloperServices/OKE/templates/nodepool-template
+++ b/cd3_automation_toolkit/DeveloperServices/OKE/templates/nodepool-template
@@ -74,6 +74,9 @@ nodepools = {
{% if nodepool_nsgs %}
worker_nsg_ids = [{{ nodepool_nsgs }}]
{% endif %}
+ {% if nodepool_kms_key_id %}
+ nodepool_kms_key_id = "{{ nodepool_kms_key_id }}"
+ {% endif %}
{# ##Do not modify below this line## #}
{# #}
{# ###Section for adding Defined and Freeform Tags### #}
diff --git a/cd3_automation_toolkit/DeveloperServices/ResourceManager/create_resource_manager_stack.py b/cd3_automation_toolkit/DeveloperServices/ResourceManager/create_resource_manager_stack.py
index 2bd7fd91a..d1efce20c 100644
--- a/cd3_automation_toolkit/DeveloperServices/ResourceManager/create_resource_manager_stack.py
+++ b/cd3_automation_toolkit/DeveloperServices/ResourceManager/create_resource_manager_stack.py
@@ -31,9 +31,9 @@ def create_rm(service_rm_name, comp_id,ocs_stack,svcs):
stackdetails = CreateStackDetails()
zipConfigSource = CreateZipUploadConfigSourceDetails()
if svcs == []:
- stackdetails.description = "Created by Automation Tool Kit"
+ stackdetails.description = "Created by Automation ToolKit"
else:
- stackdetails.description = "Created by Automation Tool Kit for services - "+ ','.join(svcs)
+ stackdetails.description = "Created by Automation ToolKit for services - "+ ','.join(svcs)
stackdetails.terraform_version = "1.0.x"
stackdetails.compartment_id = comp_id
stackdetails.display_name = service_rm_name
@@ -65,9 +65,9 @@ def update_rm(service_rm_name,service_rm_ocid,ocs_stack,svcs):
updatestackdetails.config_source = zipConfigSource
updatestackdetails.terraform_version = "1.0.x"
if svcs == []:
- updatestackdetails.description = "Updated by Automation Tool Kit"
+ updatestackdetails.description = "Updated by Automation ToolKit"
else:
- updatestackdetails.description = "Updated by Automation Tool Kit for services - "+ ','.join(svcs)
+ updatestackdetails.description = "Updated by Automation ToolKit for services - "+ ','.join(svcs)
mstack = ocs_stack.update_stack(stack_id=service_rm_ocid, update_stack_details=updatestackdetails)
stack_ocid = mstack.data.id
@@ -75,7 +75,7 @@ def update_rm(service_rm_name,service_rm_ocid,ocs_stack,svcs):
return stack_ocid
# Execution of the code begins here
-def create_resource_manager(outdir,var_file, outdir_struct,prefix,regions, config=DEFAULT_LOCATION):
+def create_resource_manager(outdir,var_file, outdir_struct,prefix,auth_mechanism, config_file, ct, regions):
# Get list of services for one directory
dir_svc_map = {}
@@ -87,11 +87,8 @@ def create_resource_manager(outdir,var_file, outdir_struct,prefix,regions, confi
print("Fetching Compartment Detail. Please wait...")
- configFileName = config
- config = oci.config.from_file(file_location=configFileName)
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
+ config, signer = ct.authenticate(auth_mechanism, config_file)
#ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
ct.get_compartment_map(var_file,'RM')
print("Proceeding further...")
@@ -105,6 +102,16 @@ def create_resource_manager(outdir,var_file, outdir_struct,prefix,regions, confi
region_dir=outdir + "/" + region
+ for path, subdirs, files in os.walk(region_dir):
+ for name in files:
+ filep = os.path.join(path, name)
+ if 'backend.tf' in filep:
+ f_b=open(filep,"r")
+ f_d=f_b.read()
+ if 'This line will be removed when using remote state' not in f_d:
+ print("Toolkit has been configured to use remote state. OCI Resource Manager does not support that. Exiting!")
+ exit(1)
+
if region == 'global':
outdir_struct = {'rpc':'rpc'}
else:
@@ -184,10 +191,15 @@ def create_resource_manager(outdir,var_file, outdir_struct,prefix,regions, confi
except FileNotFoundError as e:
pass
+ if ct.orm_comp_filter == "null":
+ comp_name = None
+ else:
+ comp_name = ct.orm_comp_filter if ct.orm_comp_filter else input(
+ "Enter Resource Manager Compartment Name : ")
+
#3. Read existing rm_ocids.csv file and get the data in map;
for region in regions:
rm_ocids_file = outdir+'/'+region+'/rm_ocids.csv'
- comp_name = ''
if os.path.exists(rm_ocids_file):
with open(rm_ocids_file) as csv_file:
csv_reader = csv.reader(csv_file, delimiter=';')
@@ -203,20 +215,23 @@ def create_resource_manager(outdir,var_file, outdir_struct,prefix,regions, confi
#put comp name of last stack in the variable
comp_name = rm_comp_name
else:
- comp_name = input("Enter Resource Manager Compartment Name for "+region +" region: ")
+ comp_name = comp_name
+
+ #comp_name= input("Enter Resource Manager Compartment Name for "+region +" region: ")
try:
comp_id = ct.ntk_compartment_ids[comp_name]
except KeyError as e:
- print("Compartment Name "+comp_name +" does not exist in OCI. Please Try Again")
+ #print("Compartment Name "+comp_name +" does not exist in OCI. Please Try Again")
if os.path.exists(rm_ocids_file):
print("Removing rm_ocids.csv file for region "+region)
os.remove(rm_ocids_file)
- comp_name = input("Enter a new Compartment Name for Resource Manager for "+region +" region: ")
- try:
- comp_id = ct.ntk_compartment_ids[comp_name]
- except Exception as e:
- print("Invalid Compartment Name. Please Try again. Exiting...")
+ #comp_name = input("Enter a new Compartment Name for Resource Manager for "+region +" region: ")
+ #try:
+ # comp_id = ct.ntk_compartment_ids[comp_name]
+ #except Exception as e:
+ print("Invalid Compartment Name. Please try again. Exiting...")
+ exit(1)
# Start creating stacks
@@ -227,6 +242,7 @@ def create_resource_manager(outdir,var_file, outdir_struct,prefix,regions, confi
save_dir_svc_map = dir_svc_map.copy()
for region in regions:
+
print("\nStart creating Stacks for "+region+ " region...")
region_dir = outdir + "/" + region
if region == 'global':
@@ -247,7 +263,7 @@ def create_resource_manager(outdir,var_file, outdir_struct,prefix,regions, confi
else:
new_config.__setitem__("region", str(ct.region_dict[region]))
- ocs_stack = oci.resource_manager.ResourceManagerClient(new_config)
+ ocs_stack = oci.resource_manager.ResourceManagerClient(config=new_config,signer=signer)
#Process files in region directory - single outdir
if len(outdir_struct.items())==0:
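The new pre-flight walk over `region_dir` refuses to create stacks when the generated Terraform is wired for remote state, which OCI Resource Manager cannot consume. A self-contained sketch of the same scan (the marker string is taken from the hunk above; the function name is an assumption):

```python
import os
import tempfile

MARKER = 'This line will be removed when using remote state'

def uses_remote_state(region_dir):
    # Walk every subdirectory; any backend.tf missing the local-state
    # marker comment means the toolkit was configured for remote state.
    for path, _subdirs, files in os.walk(region_dir):
        for name in files:
            filep = os.path.join(path, name)
            if 'backend.tf' in filep:
                with open(filep) as f_b:
                    if MARKER not in f_b.read():
                        return True
    return False

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "backend.tf"), "w") as f:
        f.write('terraform { backend "s3" {} }\n')  # no marker: remote state
    print(uses_remote_state(d))
```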
diff --git a/cd3_automation_toolkit/Governance/Billing/create_terraform_budget.py b/cd3_automation_toolkit/Governance/Billing/create_terraform_budget.py
index 807557f44..32904a3d7 100644
--- a/cd3_automation_toolkit/Governance/Billing/create_terraform_budget.py
+++ b/cd3_automation_toolkit/Governance/Billing/create_terraform_budget.py
@@ -19,13 +19,8 @@
# Required Inputs- Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_cis_budget(outdir, service_dir, prefix, amount, threshold,config=DEFAULT_LOCATION):
+def create_cis_budget(outdir, service_dir, prefix, ct, amount, threshold):
- # Declare variables
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
diff --git a/cd3_automation_toolkit/Governance/Tagging/create_terraform_tags.py b/cd3_automation_toolkit/Governance/Tagging/create_terraform_tags.py
index 5bbc7b44f..0c7eea34f 100644
--- a/cd3_automation_toolkit/Governance/Tagging/create_terraform_tags.py
+++ b/cd3_automation_toolkit/Governance/Tagging/create_terraform_tags.py
@@ -21,12 +21,8 @@
# Required Inputs-CD3 excel file, Config file AND outdir
######
# Execution of the code begins here
-def create_terraform_tags(inputfile, outdir, service_dir, prefix, config):
+def create_terraform_tags(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
sheetName = "Tags"
# Load the template file
@@ -211,11 +207,11 @@ def create_terraform_tags(inputfile, outdir, service_dir, prefix, config):
if '$' not in str(default_value):
if str(default_value) not in values_list and str(default_value) != "nan" and str(default_value) != "":
print("\nERROR!! Value - "+str(default_value)+" in Default Tag Value is not present in Column Validator...Exiting!")
- exit()
+ exit(1)
else:
if '$'+str(default_value) not in values_list:
print("\nERROR!! Value - "+str(default_value)+" in Default Tag Value is not present in Column Validator...Exiting!")
- exit()
+ exit(1)
if default_value != "" and str(default_value).lower() != "nan":
if '$' in default_value and default_value.count('$') == 1:
diff --git a/cd3_automation_toolkit/Governance/Tagging/export_tags_nonGreenField.py b/cd3_automation_toolkit/Governance/Tagging/export_tags_nonGreenField.py
index 8330d5f3b..f8b49d4f1 100644
--- a/cd3_automation_toolkit/Governance/Tagging/export_tags_nonGreenField.py
+++ b/cd3_automation_toolkit/Governance/Tagging/export_tags_nonGreenField.py
@@ -95,18 +95,15 @@ def print_tags(values_for_column_tags,region, ntk_compartment_name, tag, tag_ke
importCommands[region].write("\nterraform import \"module.tag-defaults[\\\""+ tf_name_namespace+'_' +tf_name_key + '_' +commonTools.check_tf_variable(value.split("=")[0]).strip()+ '-default'+ '\\\"].oci_identity_tag_default.tag_default\" ' + str(defaultcomp_to_tagid_map[tf_name_key+"-"+commonTools.check_tf_variable(value.split("=")[0])]))
# Execution of the code begins here
-def export_tags_nongreenfield(inputfile, outdir, service_dir, _config, export_compartments,ct):
+def export_tags_nongreenfield(inputfile, outdir, service_dir, config, signer, ct, export_compartments):
global tf_import_cmd
global values_for_column_tags
global sheet_dict_tags
global importCommands
- global config
global tag_default_comps_map
global defaultcomp_to_tagid_map
cd3file = inputfile
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
@@ -114,10 +111,6 @@ def export_tags_nongreenfield(inputfile, outdir, service_dir, _config, export_co
# Read CD3
df, values_for_column_tags = commonTools.read_cd3(cd3file, "Tags")
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
tag_default_comps_map = {}
tag_name_id_map = {}
@@ -142,7 +135,7 @@ def export_tags_nongreenfield(inputfile, outdir, service_dir, _config, export_co
print("\nFetching Tags...")
importCommands[ct.home_region].write("\n\n######### Writing import for Tags #########\n\n")
config.__setitem__("region", ct.region_dict[ct.home_region])
- identity = IdentityClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ identity = IdentityClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
region = ct.home_region.lower()
comp_ocid_done = []
diff --git a/cd3_automation_toolkit/Identity/Compartments/create_terraform_compartments.py b/cd3_automation_toolkit/Identity/Compartments/create_terraform_compartments.py
index 364988560..dc9fd35e2 100644
--- a/cd3_automation_toolkit/Identity/Compartments/create_terraform_compartments.py
+++ b/cd3_automation_toolkit/Identity/Compartments/create_terraform_compartments.py
@@ -20,10 +20,9 @@
# Required Inputs-CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_compartments(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_compartments(inputfile, outdir, service_dir, prefix, ct):
# Declare variables
filename = inputfile
- configFileName = config
sheetName = 'Compartments'
auto_tfvars_filename = '_'+sheetName.lower()+'.auto.tfvars'
@@ -46,10 +45,6 @@ def create_terraform_compartments(inputfile, outdir, service_dir, prefix, config
# List of the column headers
dfcolumns = df.columns.values.tolist()
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- #config = oci.config.from_file(file_location=configFileName)
- #ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
home_region = ct.home_region
srcdir = outdir + "/" + home_region + "/" + service_dir + "/"
@@ -110,7 +105,7 @@ def travel(parent, keys, values, c):
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Name']).lower() == 'nan':
print("\nThe values for Region and Name cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
var_c_name = ""
nf=0
diff --git a/cd3_automation_toolkit/Identity/Groups/create_terraform_groups.py b/cd3_automation_toolkit/Identity/Groups/create_terraform_groups.py
index 7aa1ea4fa..48822f13b 100644
--- a/cd3_automation_toolkit/Identity/Groups/create_terraform_groups.py
+++ b/cd3_automation_toolkit/Identity/Groups/create_terraform_groups.py
@@ -18,15 +18,12 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_groups(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_groups(inputfile, outdir, service_dir, prefix, ct):
# Read the arguments
filename = inputfile
- configFileName = config
sheetName = 'Groups'
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -77,7 +74,7 @@ def create_terraform_groups(inputfile, outdir, service_dir, prefix, config=DEFAU
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Name']).lower() == 'nan' :
print("\nThe values for Region and Name cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
for columnname in dfcolumns:
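The mandatory-field checks throughout these files compare against the string `'nan'` because pandas materialises an empty Excel cell as `float('nan')`. A stdlib illustration of why that comparison catches blanks:

```python
import math

# An empty spreadsheet cell arrives as NaN; str() turns it into "nan",
# so a case-insensitive match flags the missing mandatory value.
cell = math.nan
missing = str(cell).lower() == 'nan'
print(missing)
```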
diff --git a/cd3_automation_toolkit/Identity/NetworkSources/create_terraform_networkSources.py b/cd3_automation_toolkit/Identity/NetworkSources/create_terraform_networkSources.py
index 2f8f397da..cc6113a60 100644
--- a/cd3_automation_toolkit/Identity/NetworkSources/create_terraform_networkSources.py
+++ b/cd3_automation_toolkit/Identity/NetworkSources/create_terraform_networkSources.py
@@ -17,15 +17,12 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_networkSources(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_networkSources(inputfile, outdir, service_dir, prefix, ct):
# Read the arguments
filename = inputfile
- configFileName = config
sheetName = 'NetworkSources'
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -75,7 +72,7 @@ def create_terraform_networkSources(inputfile, outdir, service_dir, prefix, conf
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Name']).lower() == 'nan' or str(df.loc[i, 'Description']).lower() == 'nan' :
print("\nThe values for Region, Name and Description cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
for columnname in dfcolumns:
diff --git a/cd3_automation_toolkit/Identity/NetworkSources/export_networkSources_nonGreenField.py b/cd3_automation_toolkit/Identity/NetworkSources/export_networkSources_nonGreenField.py
index f7a94f93b..3f020fc2a 100644
--- a/cd3_automation_toolkit/Identity/NetworkSources/export_networkSources_nonGreenField.py
+++ b/cd3_automation_toolkit/Identity/NetworkSources/export_networkSources_nonGreenField.py
@@ -16,9 +16,8 @@
from commonTools import *
# Execution of the code begins here
-def export_networkSources(inputfile, outdir, service_dir, _config, ct):
+def export_networkSources(inputfile, outdir, service_dir, config, signer, ct):
global values_for_column_networkSources
- global config
global cd3file
cd3file = inputfile
@@ -27,10 +26,6 @@ def export_networkSources(inputfile, outdir, service_dir, _config, ct):
print("\nAcceptable cd3 format: .xlsx")
exit()
-
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
importCommands={}
sheetName = "NetworkSources"
@@ -53,8 +48,7 @@ def export_networkSources(inputfile, outdir, service_dir, _config, ct):
importCommands[ct.home_region].write("terraform init")
config.__setitem__("region", ct.region_dict[ct.home_region])
- idc=IdentityClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- network = oci.core.VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ idc=IdentityClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
# Fetch Network Sources
print("\nFetching Network Sources...")
diff --git a/cd3_automation_toolkit/Identity/Policies/create_terraform_policies.py b/cd3_automation_toolkit/Identity/Policies/create_terraform_policies.py
index 2a34d886a..44acdaae9 100644
--- a/cd3_automation_toolkit/Identity/Policies/create_terraform_policies.py
+++ b/cd3_automation_toolkit/Identity/Policies/create_terraform_policies.py
@@ -20,14 +20,11 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_policies(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_policies(inputfile, outdir, service_dir, prefix, ct):
# Declare variables
filename = inputfile
- configFileName = config
sheetName = 'Policies'
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
diff --git a/cd3_automation_toolkit/Identity/Users/create_terraform_users.py b/cd3_automation_toolkit/Identity/Users/create_terraform_users.py
index d5b329603..9e98e79e6 100644
--- a/cd3_automation_toolkit/Identity/Users/create_terraform_users.py
+++ b/cd3_automation_toolkit/Identity/Users/create_terraform_users.py
@@ -17,15 +17,12 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_users(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_users(inputfile, outdir, service_dir, prefix, ct):
# Read the arguments
filename = inputfile
- configFileName = config
sheetName = 'Users'
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -75,7 +72,7 @@ def create_terraform_users(inputfile, outdir, service_dir, prefix, config=DEFAUL
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'User Name']).lower() == 'nan' :
print("\nThe values for Region and Name cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
for columnname in dfcolumns:
diff --git a/cd3_automation_toolkit/Identity/Users/export_users_nonGreenField.py b/cd3_automation_toolkit/Identity/Users/export_users_nonGreenField.py
index fe1765ac7..c654589af 100644
--- a/cd3_automation_toolkit/Identity/Users/export_users_nonGreenField.py
+++ b/cd3_automation_toolkit/Identity/Users/export_users_nonGreenField.py
@@ -16,14 +16,13 @@
from commonTools import *
# Execution of the code begins here
-def export_users(inputfile, outdir, service_dir, _config, ct):
+def export_users(inputfile, outdir, service_dir, config, signer, ct):
global values_for_column_comps
global values_for_column_groups
global values_for_column_policies
global sheet_dict_comps
global sheet_dict_groups
global sheet_dict_policies
- global config
global cd3file
cd3file = inputfile
@@ -33,8 +32,6 @@ def export_users(inputfile, outdir, service_dir, _config, ct):
exit()
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
importCommands={}
sheetName_users = "Users"
@@ -58,7 +55,7 @@ def export_users(inputfile, outdir, service_dir, _config, ct):
importCommands[ct.home_region].write("terraform init")
config.__setitem__("region", ct.region_dict[ct.home_region])
- idc=IdentityClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ idc=IdentityClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
#retrieve group information..this is required to get group name for user-groupmembership
groups = oci.pagination.list_call_get_all_results(idc.list_groups, compartment_id=config['tenancy'])
diff --git a/cd3_automation_toolkit/Identity/export_identity_nonGreenField.py b/cd3_automation_toolkit/Identity/export_identity_nonGreenField.py
index b3bb041bb..f263639ee 100644
--- a/cd3_automation_toolkit/Identity/export_identity_nonGreenField.py
+++ b/cd3_automation_toolkit/Identity/export_identity_nonGreenField.py
@@ -18,14 +18,13 @@
from commonTools import *
# Execution of the code begins here
-def export_identity(inputfile, outdir, service_dir, _config, ct, export_compartments=[]):
+def export_identity(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[]):
global values_for_column_comps
global values_for_column_groups
global values_for_column_policies
global sheet_dict_comps
global sheet_dict_groups
global sheet_dict_policies
- global config
global cd3file
cd3file = inputfile
@@ -34,9 +33,6 @@ def export_identity(inputfile, outdir, service_dir, _config, ct, export_compartm
print("\nAcceptable cd3 format: .xlsx")
exit()
-
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
importCommands={}
sheetName_comps = "Compartments"
@@ -68,7 +64,7 @@ def export_identity(inputfile, outdir, service_dir, _config, ct, export_compartm
importCommands[ct.home_region].write("terraform init")
config.__setitem__("region", ct.region_dict[ct.home_region])
- idc=IdentityClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ idc=IdentityClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
#Fetch Compartments
print("\nFetching Compartments...")
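Across these export functions the `(config, signer)` pair replaces per-module `oci.config.from_file` calls, so a single authentication decision (API key vs. instance principal) feeds every OCI client. A hypothetical sketch of the shape such a helper could take — `authenticate`, its branches, and its return values are assumptions for illustration, not the toolkit's actual implementation:

```python
def authenticate(auth_mechanism, config_file):
    # Hypothetical stub: the real toolkit builds an oci SDK signer here.
    # It only shows the contract the refactored functions now depend on:
    # a dict-like config plus a signer object passed to each client.
    if auth_mechanism == "instance_principal":
        config = {}            # region is filled in per loop iteration
        signer = object()      # stand-in for an oci.auth.signers signer
    else:
        config = {"region": "us-ashburn-1"}  # parsed from config_file
        signer = None          # API-key auth needs no explicit signer
    return config, signer

config, signer = authenticate("api_key", "~/.oci/config")
print(isinstance(config, dict), signer is None)
```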
diff --git a/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_events.py b/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_events.py
index 2c596182f..46256aeea 100644
--- a/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_events.py
+++ b/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_events.py
@@ -27,15 +27,12 @@ def extend_event(service_name, resources, listeventid):
# Execution of the code begins here
-def create_terraform_events(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_events(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "Events"
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
tempStr={}
event_data = ""
Events_names={}
@@ -90,7 +87,7 @@ def create_terraform_events(inputfile, outdir, service_dir, prefix, config=DEFAU
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Compartment Name']).lower() == 'nan' or str(df.loc[i, 'Event Name']).lower() == 'nan' or str(df.loc[i, 'Action Type']).lower() == 'nan' or str(df.loc[i, 'Action is Enabled']).lower() == 'nan' or str(df.loc[i, 'Service Name']).lower() == 'nan' or str(df.loc[i, 'Resource']).lower() == 'nan' or str(df.loc[i, 'Event is Enabled']).lower() == 'nan'or str(df.loc[i, 'Topic']).lower() == 'nan' :
print("\nThe values for Region, Compartment, Topic, Protocol and Endpoint cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
for columnname in dfcolumns:
# Column value
diff --git a/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_notifications.py b/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_notifications.py
index 0f7047f6b..17983d1e0 100644
--- a/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_notifications.py
+++ b/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/create_terraform_notifications.py
@@ -20,16 +20,13 @@
######
# Execution of the code begins here
-def create_terraform_notifications(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_notifications(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
outdir = outdir
sheetName="Notifications"
topics_auto_tfvars_filename = '_' + sheetName.lower() + '-topics.auto.tfvars'
subs_auto_tfvars_filename = '_' + sheetName.lower() + '-subscriptions.auto.tfvars'
- configFileName = config
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
x = datetime.datetime.now()
date = x.strftime("%f").strip()
tempStr={}
@@ -87,7 +84,7 @@ def create_terraform_notifications(inputfile, outdir, service_dir, prefix, confi
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Compartment Name']).lower() == 'nan' or str(df.loc[i, 'Topic']).lower() == 'nan' :
print("\nThe values for Region, Compartment, Topic cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
for columnname in dfcolumns:
# Column value
columnvalue = str(df[columnname][i])
diff --git a/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/export_events_notifications_nonGreenField.py b/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/export_events_notifications_nonGreenField.py
index d9b1135d7..f87da1221 100644
--- a/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/export_events_notifications_nonGreenField.py
+++ b/cd3_automation_toolkit/ManagementServices/EventsAndNotifications/export_events_notifications_nonGreenField.py
@@ -155,7 +155,7 @@ def events_rows(values_for_column_events, region, ntk_compartment_name, event_na
values_for_column_events = commonTools.export_extra_columns(oci_objs, col_header, sheet_dict_events,values_for_column_events)
# Execution for Events export starts here
-def export_events(inputfile, outdir, service_dir, ct,export_compartments=[], export_regions=[], _config=DEFAULT_LOCATION):
+def export_events(inputfile, outdir, service_dir, config, signer, ct,export_compartments=[], export_regions=[]):
global rows
global tf_import_cmd
global values_for_column_events
@@ -163,13 +163,10 @@ def export_events(inputfile, outdir, service_dir, ct,export_compartments=[], exp
global sheet_dict_events
global sheet_dict_notifications
global importCommands
- global config
sheetName = "Events"
cd3file = inputfile
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
@@ -178,11 +175,6 @@ def export_events(inputfile, outdir, service_dir, ct,export_compartments=[], exp
# Read CD3
df, values_for_column_events = commonTools.read_cd3(cd3file, sheetName)
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
# Get dict for columns from Excel_Columns
sheet_dict_events = ct.sheet_dict[sheetName]
@@ -208,9 +200,9 @@ def export_events(inputfile, outdir, service_dir, ct,export_compartments=[], exp
importCommands[reg].write("\n\n######### Writing import for Events #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
# comp_ocid_done = []
- ncpc = NotificationControlPlaneClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- fun = FunctionsManagementClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- evt = EventsClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ ncpc = NotificationControlPlaneClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ fun = FunctionsManagementClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ evt = EventsClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
for ntk_compartment_name in export_compartments:
evts = oci.pagination.list_call_get_all_results(evt.list_rules, compartment_id=ct.ntk_compartment_ids[
@@ -232,7 +224,7 @@ def export_events(inputfile, outdir, service_dir, ct,export_compartments=[], exp
importCommands[reg].write('\n\nterraform plan\n')
# Execution for Notifications export starts here
-def export_notifications(inputfile, outdir, service_dir, ct, export_compartments=[], _config=DEFAULT_LOCATION,export_regions=[]):
+def export_notifications(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global rows
global tf_import_cmd
global values_for_column_events
@@ -240,14 +232,10 @@ def export_notifications(inputfile, outdir, service_dir, ct, export_compartments
global sheet_dict_events
global sheet_dict_notifications
global importCommands
- global config
sheetName = "Notifications"
cd3file = inputfile
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
exit()
@@ -255,11 +243,6 @@ def export_notifications(inputfile, outdir, service_dir, ct, export_compartments
# Read CD3
df, values_for_column_notifications = commonTools.read_cd3(cd3file, sheetName)
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
# Get dict for columns from Excel_Columns
sheet_dict_notifications = ct.sheet_dict[sheetName]
@@ -284,9 +267,9 @@ def export_notifications(inputfile, outdir, service_dir, ct, export_compartments
for reg in export_regions:
importCommands[reg].write("\n\n######### Writing import for Notifications #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
- ncpc = NotificationControlPlaneClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- ndpc = NotificationDataPlaneClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- fun = FunctionsManagementClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ ncpc = NotificationControlPlaneClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ ndpc = NotificationDataPlaneClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ fun = FunctionsManagementClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
for ntk_compartment_name in export_compartments:
topics = oci.pagination.list_call_get_all_results(ncpc.list_topics,compartment_id=ct.ntk_compartment_ids[ntk_compartment_name])
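The recurring change in this file — building every OCI SDK client from one shared `config` dict plus a `signer`, instead of each exporter calling `oci.config.from_file()` itself — can be sketched without the SDK. `StubClient` and `make_clients` below are hypothetical stand-ins for illustration, not toolkit or SDK code.

```python
# Sketch of the refactor: auth is resolved once by the caller, and the same
# config/signer pair is handed to every client constructor. StubClient mimics
# the keyword-only constructor shape used in the diff.
class StubClient:
    def __init__(self, config, retry_strategy=None, signer=None):
        self.config = config
        self.retry_strategy = retry_strategy
        self.signer = signer

def make_clients(config, signer, client_classes, retry_strategy="DEFAULT"):
    """Build each client with the shared config and signer."""
    return [cls(config=config, retry_strategy=retry_strategy, signer=signer)
            for cls in client_classes]

config = {"tenancy": "ocid1.tenancy.oc1..example", "region": "us-ashburn-1"}
signer = object()  # stands in for an oci Signer / instance-principal signer
ncpc, fun, evt = make_clients(config, signer, [StubClient, StubClient, StubClient])
```

Passing the signer explicitly is what lets the same exporters work with API-key, session-token, or instance-principal authentication without re-reading a profile file.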
diff --git a/cd3_automation_toolkit/ManagementServices/Logging/enable_terraform_logging.py b/cd3_automation_toolkit/ManagementServices/Logging/enable_terraform_logging.py
index a5c4e6539..97ab2011c 100644
--- a/cd3_automation_toolkit/ManagementServices/Logging/enable_terraform_logging.py
+++ b/cd3_automation_toolkit/ManagementServices/Logging/enable_terraform_logging.py
@@ -19,7 +19,7 @@
# Required Inputs- Config file, prefix AND outdir
######
# Execution of the code begins here
-def enable_cis_oss_logging(filename, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def enable_cis_oss_logging(filename, outdir, service_dir, prefix, ct):
# Read cd3 using pandas dataframe
df, col_headers = commonTools.read_cd3(filename, "Buckets")
@@ -129,7 +129,7 @@ def enable_cis_oss_logging(filename, outdir, service_dir, prefix, config=DEFAULT
oname.close()
print(outfile[reg] + " for OSS Bucket Logs has been created for region "+reg)
-def enable_cis_vcnflow_logging(filename, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def enable_cis_vcnflow_logging(filename, outdir, service_dir, prefix, ct):
# Read cd3 using pandas dataframe
df, col_headers = commonTools.read_cd3(filename, "SubnetsVLANs")
@@ -250,10 +250,9 @@ def enable_cis_vcnflow_logging(filename, outdir, service_dir, prefix, config=DEF
oname.close()
print(outfile[reg] + " for VCN Flow Logs has been created for region "+reg)
-def enable_load_balancer_logging(filename, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def enable_load_balancer_logging(filename, outdir, service_dir, prefix, ct):
# Declare variables
- configFileName = config
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
@@ -271,8 +270,6 @@ def enable_load_balancer_logging(filename, outdir, service_dir, prefix, config=D
# List of the column headers
dfcolumns = df.columns.values.tolist()
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
tfStrLogs = {}
tempStr = {}
diff --git a/cd3_automation_toolkit/ManagementServices/Monitoring/create_terraform_alarms.py b/cd3_automation_toolkit/ManagementServices/Monitoring/create_terraform_alarms.py
index 20bf6c332..66496d6dc 100644
--- a/cd3_automation_toolkit/ManagementServices/Monitoring/create_terraform_alarms.py
+++ b/cd3_automation_toolkit/ManagementServices/Monitoring/create_terraform_alarms.py
@@ -16,14 +16,12 @@
# Execution of the code begins here
-def create_terraform_alarms(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_alarms(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = 'Alarms'
auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
+
x = datetime.datetime.now()
date = x.strftime("%f").strip()
tempStr={}
@@ -75,7 +73,7 @@ def create_terraform_alarms(inputfile, outdir, service_dir, prefix, config=DEFAU
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Compartment Name']).lower() == 'nan' or str(df.loc[i, 'Alarm Name']).lower() == 'nan' or str(df.loc[i, 'Destination Topic Name']).lower() == 'nan' or str(df.loc[i, 'Is Enabled']).lower() == 'nan' or str(df.loc[i, 'Metric Compartment Name']).lower() == 'nan' or str(df.loc[i, 'Namespace']).lower() == 'nan' or str(df.loc[i, 'Severity']).lower() == 'nan' or str(df.loc[i, 'Query']).lower() == 'nan':
print("\nThe values for Region, Compartment, Alarm Name, Destination Topic Name, Is Enabled, Metric Compartment Name, Namespace, Severity and Query cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
#metric = str(df.loc[i, 'Metric Name']).strip()
#interval = str(df.loc[i, 'Interval']).strip()
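The `exit()` → `exit(1)` changes in these hunks make validation failures visible to shells and CI pipelines, since a bare `exit()` returns status 0 (success). The `'nan'` comparison exists because pandas renders empty Excel cells as NaN, which `str()` turns into the literal `'nan'`. A minimal sketch of that check, with a made-up row and column names:

```python
# Sketch of the mandatory-field validation: empty sheet cells arrive as NaN,
# so str(value).lower() == 'nan' flags them. check_mandatory is hypothetical.
def check_mandatory(row, required):
    """Return the required columns whose cells are empty (NaN)."""
    return [col for col in required if str(row.get(col, "nan")).lower() == "nan"]

row = {"Region": "ashburn", "Compartment Name": float("nan"), "Alarm Name": "cpu-high"}
missing = check_mandatory(row, ["Region", "Compartment Name", "Alarm Name"])
# missing == ['Compartment Name']; the toolkit would print an error and
# call exit(1) here so the failure propagates as a non-zero status.
```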
diff --git a/cd3_automation_toolkit/ManagementServices/Monitoring/export_alarms_nonGreenField.py b/cd3_automation_toolkit/ManagementServices/Monitoring/export_alarms_nonGreenField.py
index 661c450d7..e67279073 100644
--- a/cd3_automation_toolkit/ManagementServices/Monitoring/export_alarms_nonGreenField.py
+++ b/cd3_automation_toolkit/ManagementServices/Monitoring/export_alarms_nonGreenField.py
@@ -70,14 +70,12 @@ def print_alarms(region, alarm, ncpclient,values_for_column, ntk_compartment_nam
importCommands[region.lower()].write("\nterraform import \"module.alarms[\\\"" + str(comp_tf_name+"_"+alarm_tf_name) + "\\\"].oci_monitoring_alarm.alarm\" " + str(alarm.id))
# Execution of the code begins here
-def export_alarms(inputfile, _outdir, service_dir, _config, ct, export_compartments=[],export_regions=[]):
+def export_alarms(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global values_for_column
@@ -87,17 +85,8 @@ def export_alarms(inputfile, _outdir, service_dir, _config, ct, export_compartme
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName="Alarms"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
# Read CD3
df, values_for_column= commonTools.read_cd3(cd3file,sheetName)
@@ -127,8 +116,8 @@ def export_alarms(inputfile, _outdir, service_dir, _config, ct, export_compartme
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- mclient = oci.monitoring.MonitoringClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- ncpclient = oci.ons.NotificationControlPlaneClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ mclient = oci.monitoring.MonitoringClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ ncpclient = oci.ons.NotificationControlPlaneClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
for ntk_compartment_name in export_compartments:
alarms = oci.pagination.list_call_get_all_results(mclient.list_alarms,compartment_id=ct.ntk_compartment_ids[ntk_compartment_name], lifecycle_state="ACTIVE")
diff --git a/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/create_terraform_service_connectors.py b/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/create_terraform_service_connectors.py
index 546a9ede7..6a8f0b38d 100644
--- a/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/create_terraform_service_connectors.py
+++ b/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/create_terraform_service_connectors.py
@@ -16,16 +16,13 @@
from jinja2 import Environment, FileSystemLoader
# Execution of the code begins here
-def create_service_connectors(inputfile, outdir, service_dir, prefix, config):
+def create_service_connectors(inputfile, outdir, service_dir, prefix, ct):
tfStr = {}
filename = inputfile
- configFileName = config
sheetName = "ServiceConnectors"
auto_tfvars_filename = prefix + '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
diff --git a/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/export_sch_nonGreenField.py b/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/export_sch_nonGreenField.py
index 58b123687..967480323 100644
--- a/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/export_sch_nonGreenField.py
+++ b/cd3_automation_toolkit/ManagementServices/ServiceConnectorHub/export_sch_nonGreenField.py
@@ -16,7 +16,7 @@
oci_obj_names = {}
-def get_service_connectors(region, SCH_LIST, sch_client, log_client, la_client, identity_client, stream_client,
+def get_service_connectors(config, region, SCH_LIST, sch_client, log_client, la_client, stream_client,
notification_client, func_client, ct, values_for_column, ntk_compartment_name):
volume_comp = ""
log_source_list = []
@@ -122,10 +122,7 @@ def get_comp_details(comp_data):
if target_kind == "loggingAnalytics":
dest_log_group_id = getattr(target_data, 'log_group_id')
target_log_source_identifier = getattr(target_data, 'log_source_identifier')
- dest_logs_compartment_details = la_client.get_log_analytics_log_group(
- log_analytics_log_group_id=dest_log_group_id, namespace_name=la_client.list_namespaces(
- compartment_id=identity_client.get_user(config["user"]).data.compartment_id).data.items[
- 0].namespace_name)
+ dest_logs_compartment_details = la_client.get_log_analytics_log_group(log_analytics_log_group_id=dest_log_group_id, namespace_name=la_client.list_namespaces(compartment_id=config["tenancy"]).data.items[0].namespace_name)
target_log_group_name = getattr(dest_logs_compartment_details.data, 'display_name')
target_comp_id = getattr(dest_logs_compartment_details.data, 'compartment_id')
target_comp_name = get_comp_details(target_comp_id)
@@ -214,14 +211,12 @@ def get_comp_details(comp_data):
values_for_column)
# Execution of the code begins here
-def export_service_connectors(inputfile, _outdir, service_dir, _config, ct, export_compartments=[],export_regions=[]):
+def export_service_connectors(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global valuesforcolumn
cd3file = inputfile
@@ -229,16 +224,7 @@ def export_service_connectors(inputfile, _outdir, service_dir, _config, ct, expo
print("\nAcceptable cd3 format: .xlsx")
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = "ServiceConnectors"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
-
# Read CD3
df, values_for_column = commonTools.read_cd3(cd3file, sheetName)
@@ -267,21 +253,19 @@ def export_service_connectors(inputfile, _outdir, service_dir, _config, ct, expo
importCommands[reg].write("\n\n######### Writing import for Service Connectors #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- sch_client = oci.sch.ServiceConnectorClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- log_client = oci.logging.LoggingManagementClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- la_client = oci.log_analytics.LogAnalyticsClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- identity_client = oci.identity.IdentityClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- stream_client = oci.streaming.StreamAdminClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- notification_client = oci.ons.NotificationControlPlaneClient(config,
- retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- func_client = oci.functions.FunctionsManagementClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ sch_client = oci.sch.ServiceConnectorClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ log_client = oci.logging.LoggingManagementClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ la_client = oci.log_analytics.LogAnalyticsClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ stream_client = oci.streaming.StreamAdminClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ notification_client = oci.ons.NotificationControlPlaneClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ func_client = oci.functions.FunctionsManagementClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
for ntk_compartment_name in export_compartments:
SCH_LIST = oci.pagination.list_call_get_all_results(sch_client.list_service_connectors,
compartment_id=ct.ntk_compartment_ids[
ntk_compartment_name], lifecycle_state="ACTIVE",
sort_by="timeCreated")
- get_service_connectors(region, SCH_LIST, sch_client, log_client, la_client, identity_client,
+ get_service_connectors(config, region, SCH_LIST, sch_client, log_client, la_client,
stream_client, notification_client, func_client, ct, values_for_column, ntk_compartment_name)
commonTools.write_to_cd3(values_for_column, cd3file, sheetName)
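Each exporter above rewrites the shared config dict's `"region"` key (`config.__setitem__("region", ct.region_dict[reg])`) before constructing that region's clients. A minimal sketch of that loop, using plain dicts and a hypothetical `region_dict`/`export_region` in place of toolkit code:

```python
# Sketch of the per-region export loop: one shared config dict, whose
# "region" key is rewritten before clients are built for each region.
region_dict = {"ashburn": "us-ashburn-1", "phoenix": "us-phoenix-1"}

def export_region(config, reg):
    config["region"] = region_dict[reg]  # what config.__setitem__(...) does
    return dict(config)                  # snapshot of what a client would see

config = {"tenancy": "ocid1.tenancy.oc1..example"}
seen = [export_region(config, reg)["region"] for reg in region_dict]
```

Because the dict is mutated in place, clients must be rebuilt inside the loop for each region — reusing a client built before the mutation would still target the previous region.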
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_all_tf_objects.py b/cd3_automation_toolkit/Network/BaseNetwork/create_all_tf_objects.py
index 584ba5efc..247051616 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_all_tf_objects.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_all_tf_objects.py
@@ -20,7 +20,7 @@
from .create_terraform_subnet_vlan import create_terraform_subnet_vlan
# Execution starts here
-def create_all_tf_objects(inputfile, outdir, service_dir,prefix, config, non_gf_tenancy, modify_network=False,network_vlan_in_setupoci="network",network_connectivity_in_setupoci='network'):
+def create_all_tf_objects(inputfile, outdir, service_dir,prefix, ct, non_gf_tenancy, modify_network=False,network_vlan_in_setupoci="network",network_connectivity_in_setupoci='network'):
if not os.path.exists(outdir):
os.makedirs(outdir)
if len(service_dir) != 0:
@@ -28,24 +28,24 @@ def create_all_tf_objects(inputfile, outdir, service_dir,prefix, config, non_gf_
else:
service_dir_network = ""
with section('Process VCNs Tab and DRGs Tab'):
- create_major_objects(inputfile, outdir, service_dir_network, prefix, non_gf_tenancy, config, modify_network)
- create_terraform_defaults(inputfile, outdir, service_dir_network, prefix, non_gf_tenancy, config, modify_network)
+ create_major_objects(inputfile, outdir, service_dir_network, prefix, ct, non_gf_tenancy, modify_network)
+ create_terraform_defaults(inputfile, outdir, service_dir_network, prefix, ct, non_gf_tenancy, modify_network)
with section('Process DHCP Tab'):
- create_terraform_dhcp_options(inputfile, outdir, service_dir_network, prefix, non_gf_tenancy, config, modify_network)
+ create_terraform_dhcp_options(inputfile, outdir, service_dir_network, prefix, ct, non_gf_tenancy, modify_network)
with section('Process DRGs tab for DRG Route Tables and Route Distribution creation'):
- create_terraform_drg_route(inputfile, outdir, service_dir_network, prefix, non_gf_tenancy, config, network_connectivity_in_setupoci, modify_network)
+ create_terraform_drg_route(inputfile, outdir, service_dir_network, prefix, ct, non_gf_tenancy, network_connectivity_in_setupoci, modify_network)
if non_gf_tenancy == False:
with section('Process Subnets tab for Routes creation'):
- create_terraform_route(inputfile, outdir, service_dir_network, prefix, non_gf_tenancy,config,network_vlan_in_setupoci, modify_network)
+ create_terraform_route(inputfile, outdir, service_dir_network, prefix, ct, non_gf_tenancy, network_vlan_in_setupoci, modify_network)
if non_gf_tenancy == False:
with section('Process Subnets for Seclists creation'):
- create_terraform_seclist(inputfile, outdir, service_dir_network, prefix, config, modify_network)
+ create_terraform_seclist(inputfile, outdir, service_dir_network, prefix, ct, modify_network)
with section('Process Subnets for Subnets creation'):
- create_terraform_subnet_vlan(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, network_vlan_in_setupoci,modify_network)
+ create_terraform_subnet_vlan(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, network_vlan_in_setupoci,modify_network)
if non_gf_tenancy == False:
print('\n\nMake sure to export all SecRules, RouteRules and DRG RouteRules to CD3. Use sub-options 3,4,5 under option 3(Network) of Main Menu for the same.')
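`create_all_tf_objects` now threads a single pre-initialized `ct` (commonTools) object through every sub-creator, where previously each one called `commonTools()` and `ct.get_subscribedregions(configFileName)` itself. The effect of that dependency injection can be sketched with a counter; `CommonToolsStub` is hypothetical (the real `ct` caches subscribed regions and compartment OCIDs):

```python
# Sketch of the injection change: sub-steps receive one shared context
# object instead of each rebuilding it from the config file.
class CommonToolsStub:
    lookups = 0  # counts how often region data is resolved

    def __init__(self):
        CommonToolsStub.lookups += 1
        self.all_regions = ["ashburn", "phoenix"]

def old_style_step():
    ct = CommonToolsStub()   # every step re-resolved regions itself
    return ct.all_regions

def new_style_step(ct):
    return ct.all_regions    # reuse the caller's context

for _ in range(3):
    old_style_step()         # three steps -> three lookups

shared = CommonToolsStub()   # one lookup...
for _ in range(3):
    new_style_step(shared)   # ...reused by all three steps
```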
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_major_objects.py b/cd3_automation_toolkit/Network/BaseNetwork/create_major_objects.py
index 7c1d7f0a2..788498768 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_major_objects.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_major_objects.py
@@ -25,13 +25,9 @@
# prefix
######
# Code execution starts here
-def create_major_objects(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network=False):
+def create_major_objects(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, modify_network=False):
# Declare Variables
filename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -110,13 +106,10 @@ def establishPeering(peering_dict):
f.write(updated_data)
f.close()
- def create_drg_and_attachments(inputfile, outdir, config):
+ def create_drg_and_attachments(inputfile, outdir):
# Declare Variables
filename = inputfile
- configFileName = config
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
drg_attach_skeleton = ''
drgstr_skeleton = ''
@@ -201,10 +194,10 @@ def create_drg_and_attachments(inputfile, outdir, config):
try:
if (vcn_name.lower() != "nan" and vcns.vcns_having_drg[vcn_name,region] != drg):
print("ERROR!!! VCN "+vcn_name +" in column Attached To is not as per DRG Required column of VCNs Tab..Exiting!")
- exit()
+ exit(1)
except KeyError:
print("ERROR!!! VCN "+vcn_name+" in column Attached To is not as per VCN Name column of VCNs Tab..Exiting!")
- exit()
+ exit(1)
# Process Rows
ip=1
@@ -617,7 +610,7 @@ def processVCN(tempStr):
processVCN(tempStr)
- create_drg_and_attachments(inputfile, outdir, config)
+ create_drg_and_attachments(inputfile, outdir)
#Write outfiles
for reg in ct.all_regions:
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_defaults.py b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_defaults.py
index acf924e1e..49674e763 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_defaults.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_defaults.py
@@ -21,18 +21,14 @@
# prefix
######
-def create_default_routetable(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network):
+def create_default_routetable(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, modify_network):
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True, trim_blocks=True, lstrip_blocks=True)
filename = inputfile
- configFileName = config
vcnsheetName = "VCNs"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
default_routetable_auto_tfvars_filename = "_default-routetables.auto.tfvars"
# routetable templates
@@ -204,18 +200,14 @@ def generate_route_table_string(region_rt_name, region, routetableStr, tempStr,
oname.close()
print(default_outfile + " for default route tables has been created for region " + reg)
-def create_default_seclist(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network):
+def create_default_seclist(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, modify_network):
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True, trim_blocks=True, lstrip_blocks=True)
filename = inputfile
- configFileName = config
vcnsheetName = "VCNs"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
# seclist templates
default_seclist = env.get_template('default-seclist-template')
secrule = env.get_template('sec-rule-template')
@@ -405,7 +397,7 @@ def generate_security_rules(region_seclist_name, processed_seclist, tfStr, regio
print(default_outfile + " for default seclist has been created for region " + reg)
# Code execution starts here
-def create_terraform_defaults(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network):
+def create_terraform_defaults(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, modify_network):
- create_default_seclist(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network)
- create_default_routetable(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network)
+ create_default_seclist(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, modify_network)
+ create_default_routetable(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, modify_network)
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_dhcp_options.py b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_dhcp_options.py
index eb3943f8c..1c3df843d 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_dhcp_options.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_dhcp_options.py
@@ -25,7 +25,7 @@
# Outdir
######
# Execution of the code begins here
-def create_terraform_dhcp_options(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network=False):
+def create_terraform_dhcp_options(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, modify_network=False):
outfile = {}
deffile = {}
oname = {}
@@ -35,12 +35,8 @@ def create_terraform_dhcp_options(inputfile, outdir, service_dir, prefix, non_gf
defStr = {}
filename = inputfile
- configFileName = config
modify_network = str(modify_network).lower()
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True, trim_blocks=True, lstrip_blocks=True)
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_nsg.py b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_nsg.py
index 479b735b6..bd31aff7c 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_nsg.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_nsg.py
@@ -123,16 +123,13 @@ def statelessOptional(row, tempStr):
# Execution of the code begins here
-def create_terraform_nsg(inputfile, outdir, service_dir, prefix, non_gf_tenancy,config=DEFAULT_LOCATION):
+def create_terraform_nsg(inputfile, outdir, service_dir, prefix, ct):
# Read the arguments
filename = inputfile
- configFileName = config
sheetName = 'NSGs'
nsg_auto_tfvars_filename = '_' + sheetName.lower() + '.auto.tfvars'
nsg_rules_auto_tfvars_filename = '_nsg-rules.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_route.py b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_route.py
index 4195c5d0c..37da9bf42 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_route.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_route.py
@@ -51,6 +51,7 @@ def merge_or_generate_route_rule(reg, tempStr, modifiedroutetableStr,routetableS
end_rule = "## End Route Rule " + tempStr['region'].lower() + "_" + tempStr['rt_tf_name'] + "_" + tempStr[
'network_entity_id'] + "_" + tempStr['destination']
if start_rule in modifiedroutetableStr[reg]: # If the rule is present in filedata
+
if start_rule not in routetableStr[reg]: # But the rule is not in routetableStr then add it to filedata
if routerule.render(tempStr).strip() != '':
if reg != 'lpg_route_rules':
@@ -83,13 +84,10 @@ def merge_or_generate_route_rule(reg, tempStr, modifiedroutetableStr,routetableS
return data
# Execution of the code begins here for drg routes
-def create_terraform_drg_route(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config,network_connectivity_in_setupoci, modify_network):
+def create_terraform_drg_route(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy,network_connectivity_in_setupoci, modify_network):
filename = inputfile
- configFileName = config
drgv2 = parseDRGs(filename)
common_rt = []
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
drg_routetablefiles = {}
drg_routedistributionfiles = {}
@@ -384,13 +382,10 @@ def purge(dir, pattern):
# Execution of the code begins here for route creation
-def create_terraform_route(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, network_vlan_in_setupoci,modify_network):
+def create_terraform_route(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, network_vlan_in_setupoci,modify_network):
filename = inputfile
- configFileName = config
- ct = commonTools()
tempSkeleton = {}
- ct.get_subscribedregions(configFileName)
common_rt = []
routetablefiles = {}
tempStr = {}
@@ -463,7 +458,7 @@ def create_terraform_route(inputfile, outdir, service_dir, prefix, non_gf_tenanc
srcStr = "##Add New Route Tables for "+reg.lower()+" here##"
modifiedroutetableStr[reg] = tempSkeleton[reg].replace(srcStr,modifiedroutetableStr[reg]) #+"\n"+srcStr) ----> ToTest, if fails add +"\n"+srcStr
else:
- modifiedroutetableStr[reg] = ''
+ modifiedroutetableStr[reg] = ''
# Get Hub VCN name and create route rules for LPGs as per Section VCN_PEERING
def createLPGRouteRules(peering_dict):
ruleStr = ''
@@ -947,6 +942,7 @@ def processSubnet(tempStr):
if data_ngw != '':
data_ngw = data_ngw + "\n" + ngwStr
routetableStr[region] = routetableStr[region].replace(ngwStr, data_ngw)
+
# IGW Rules
if configure_igw.strip() == 'y' and vcn_igw != 'n':
igwStr = "####ADD_NEW_IGW_RULES " + region_rt_name + " ####"
@@ -1016,7 +1012,7 @@ def processSubnet(tempStr):
continue
# skip Subnet rows while running option Add/Modify/Delete VLANs
- if modify_network and network_vlan_in_setupoci == 'vlan' and subnet_vlan_in_excel.startswith('subnet'):
+ if modify_network and network_vlan_in_setupoci == 'vlan' and subnet_vlan_in_excel.lower().startswith('subnet'):
continue
if (region in commonTools.endNames):
@@ -1044,7 +1040,7 @@ def processSubnet(tempStr):
str(df.loc[i, 'Configure IGW Route(y|n)']).lower() == 'nan' or
str(df.loc[i, 'Configure OnPrem Route(y|n)']).lower() == 'nan' or
str(df.loc[i, 'Configure VCNPeering Route(y|n)']).lower() == 'nan'):
- print("\nERROR!!! Column Values (except DHCP Option Name, Route Table Name, Seclist Name or DNS Label) or Rows cannot be left empty in Subnets sheet in CD3..Exiting!")
+ print("\nERROR!!! Column Values (except DHCP Option Name, Route Table Name, Seclist Name or DNS Label) or Rows cannot be left empty in SubnetsVLANs sheet in CD3..Exiting!")
exit(1)
if (str(df.loc[i,'Subnet or VLAN']).strip().lower()=='subnet'):
if str(df.loc[i, 'Type(private|public)']).lower() == 'nan' or str(df.loc[i, 'Add Default Seclist']).lower() == 'nan':
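The `subnet_vlan_in_excel.lower().startswith('subnet')` fix above normalizes case before the skip check, so rows typed as `Subnet` or `SUBNET` in the sheet are treated the same as `subnet`. A small sketch with made-up cell values:

```python
# Sketch of the case-normalization fix: mixed-case "Subnet or VLAN" cells
# previously failed a bare startswith('subnet') check and were mishandled.
rows = ["subnet", "Subnet", "VLAN", "vlan-101"]

def matches_old(value):
    return value.startswith("subnet")          # case-sensitive (buggy)

def matches_new(value):
    return value.lower().startswith("subnet")  # case-insensitive (fixed)
```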
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_seclist.py b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_seclist.py
index fa4fb0e27..5ac2b2c55 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_seclist.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_seclist.py
@@ -21,7 +21,7 @@
# Required Inputs-CD3 excel file, Config file, Modify Network AND outdir
######
# Execution of the code begins here
-def create_terraform_seclist(inputfile, outdir, service_dir, prefix, config, modify_network=False):
+def create_terraform_seclist(inputfile, outdir, service_dir, prefix, ct, modify_network=False):
def purge(dir, pattern):
for f in os.listdir(dir):
@@ -31,10 +31,7 @@ def purge(dir, pattern):
filename = inputfile
- configFileName = config
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
tempSkeleton = {}
tempSecList = {}
modify_network_seclists = {}
@@ -231,7 +228,7 @@ def processSubnet(tempStr, service_dir):
# Check if values are entered for mandatory fields
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Compartment Name']).lower() == 'nan' or str(df.loc[i,'VCN Name']).lower() == 'nan':
print("\nThe values for Region, Compartment Name and VCN Name cannot be left empty in Subnets Tab. Please enter a value and try again !!")
- exit()
+ exit(1)
for columnname in dfcolumns:
# Column value
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_subnet_vlan.py b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_subnet_vlan.py
index 889f39b47..8fdbf0b11 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_subnet_vlan.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/create_terraform_subnet_vlan.py
@@ -21,13 +21,8 @@
# Required Inputs-CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_subnet_vlan(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, network_vlan_in_setupoci, modify_network=False):
+def create_terraform_subnet_vlan(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy, network_vlan_in_setupoci, modify_network=False):
filename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
fname = None
outfile={}
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/exportNSG.py b/cd3_automation_toolkit/Network/BaseNetwork/exportNSG.py
index 98a02237e..11d3b6c3c 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/exportNSG.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/exportNSG.py
@@ -158,14 +158,11 @@ def print_nsg(values_for_column_nsgs,region, comp_name, vcn_name, nsg):
importCommands[region.lower()].write("\nterraform import \"module.nsgs[\\\"" + tf_name + "\\\"].oci_core_network_security_group.network_security_group\" " + str(nsg.id))
# Execution of the code begins here
-def export_nsg(inputfile, export_compartments, export_regions, service_dir, _config, _tf_import_cmd, outdir,ct):
+def export_nsg(inputfile, outdir, service_dir, config, signer, ct, export_compartments, export_regions, _tf_import_cmd):
global tf_import_cmd
global values_for_column_nsgs
global sheet_dict_nsgs
global importCommands
- global config
- input_config_file = _config
- config = oci.config.from_file(file_location=input_config_file)
cd3file = inputfile
if '.xls' not in cd3file:
@@ -179,11 +176,6 @@ def export_nsg(inputfile, export_compartments, export_regions, service_dir, _con
# Read CD3
df, values_for_column_nsgs = commonTools.read_cd3(cd3file,"NSGs")
- if ct == None:
- ct = commonTools()
- ct.get_subscribedregions(input_config_file)
- ct.get_network_compartment_ids(config['tenancy'], "root", input_config_file)
-
print("\nFetching NSGs...")
# Get dict for columns from Excel_Columns
@@ -204,7 +196,7 @@ def export_nsg(inputfile, export_compartments, export_regions, service_dir, _con
for reg in export_regions:
config.__setitem__("region", commonTools().region_dict[reg])
- vnc = VirtualNetworkClient(config)
+ vnc = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
nsglist = [""]
for ntk_compartment_name in export_compartments:
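The hunks above replace in-function config-file parsing with a `config`/`signer` pair supplied by the caller, and attach a retry strategy to every client. A minimal sketch of that client-construction pattern; `make_client_kwargs` is a hypothetical helper, and the string retry value stands in for `oci.retry.DEFAULT_RETRY_STRATEGY`:

```python
def make_client_kwargs(config, signer, region):
    """Build the keyword arguments each exporter now passes to VirtualNetworkClient."""
    per_region = dict(config)      # copy so the caller's config is not mutated
    per_region["region"] = region  # mirrors config.__setitem__("region", ...)
    return {
        "config": per_region,
        # stands in for oci.retry.DEFAULT_RETRY_STRATEGY in the real calls
        "retry_strategy": "DEFAULT_RETRY_STRATEGY",
        "signer": signer,          # config-file or instance-principal signer
    }

base = {"tenancy": "ocid1.tenancy.oc1..aaa"}
kwargs = make_client_kwargs(base, signer="sig", region="us-ashburn-1")
```

With the real SDK this becomes `VirtualNetworkClient(**kwargs)`; the explicit `signer` is what lets the same code run under instance-principal auth without a local config file.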
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/exportRoutetable.py b/cd3_automation_toolkit/Network/BaseNetwork/exportRoutetable.py
index 424b4dd34..feca5b4d1 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/exportRoutetable.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/exportRoutetable.py
@@ -8,8 +8,8 @@
from commonTools import *
-def get_network_entity_name(config,network_identity_id):
- vcn1 = VirtualNetworkClient(config)
+def get_network_entity_name(config,signer,network_identity_id):
+ vcn1 = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
if('internetgateway' in network_identity_id):
igw=vcn1.get_internet_gateway(network_identity_id)
network_identity_name = "igw:"+igw.data.display_name
@@ -71,7 +71,7 @@ def insert_values(routetable,values_for_column,region,comp_name,name,routerule):
elif (routerule != None and col_header == 'Route Destination Object'):
network_entity_id = routerule.network_entity_id
- network_entity_name = get_network_entity_name(config, network_entity_id)
+ network_entity_name = get_network_entity_name(config, signer, network_entity_id)
values_for_column[col_header].append(network_entity_name)
if ('internetgateway' in network_entity_id):
if (routerule.destination not in values_for_vcninfo['igw_destinations']):
@@ -102,7 +102,7 @@ def insert_values_drg(routetable,import_drg_route_distribution_name,values_for_c
elif (routerule != None and col_header == 'Next Hop Attachment'):
next_hop_attachment_id=routerule.next_hop_drg_attachment_id
- network_entity_name = get_network_entity_name(config, next_hop_attachment_id)
+ network_entity_name = get_network_entity_name(config, signer, next_hop_attachment_id)
values_for_column_drg[col_header].append(network_entity_name)
else:
@@ -156,13 +156,16 @@ def print_routetables(routetables,region,vcn_name,comp_name):
print(dn + "," +str(rule.destination)+","+desc)
# Execution of the code begins here for drg route table
-def export_drg_routetable(inputfile, export_compartments, export_regions, service_dir, _config, _tf_import_cmd, outdir,ct):
+def export_drg_routetable(inputfile, outdir, service_dir, config1, signer1, ct, export_compartments, export_regions, _tf_import_cmd):
# Read the arguments
global tf_import_cmd_drg
global values_for_column_drg
global sheet_dict_drg
global importCommands_drg
global config
+ config = config1
+ global signer
+ signer = signer1
cd3file = inputfile
if '.xls' not in cd3file:
@@ -176,12 +179,6 @@ def export_drg_routetable(inputfile, export_compartments, export_regions, servic
# Read CD3
df, values_for_column_drg = commonTools.read_cd3(cd3file, "DRGRouteRulesinOCI")
- config = oci.config.from_file(_config)
-
- if ct == None:
- ct = commonTools()
- ct.get_subscribedregions(_config)
- ct.get_network_compartment_ids(config['tenancy'], "root", _config)
# Get dict for columns from Excel_Columns
sheet_dict_drg = ct.sheet_dict["DRGRouteRulesinOCI"]
@@ -196,11 +193,13 @@ def export_drg_routetable(inputfile, export_compartments, export_regions, servic
"tf_import_commands_network_drg_routerules_nonGF.sh")
importCommands_drg[reg] = open(outdir + "/" + reg + "/" + service_dir+ "/tf_import_commands_network_drg_routerules_nonGF.sh", "w")
importCommands_drg[reg].write("#!/bin/bash")
+ importCommands_drg[reg].write("\n")
+ importCommands_drg[reg].write("terraform init")
importCommands_drg[reg].write("\n\n######### Writing import for DRG Route Tables #########\n\n")
for reg in export_regions:
config.__setitem__("region", commonTools().region_dict[reg])
- vcn = VirtualNetworkClient(config, timeout=(30,120))
+ vcn = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer, timeout=(30, 120))
region = reg.capitalize()
#comp_ocid_done = []
@@ -247,15 +246,19 @@ def export_drg_routetable(inputfile, export_compartments, export_regions, servic
importCommands_drg[reg].write('\n\nterraform plan\n')
importCommands_drg[reg].close()
+
# Execution of the code begins here for route table export
-def export_routetable(inputfile, export_compartments,export_regions, service_dir, _config, _tf_import_cmd, outdir,ct):
+def export_routetable(inputfile, outdir, service_dir, config1, signer1, ct, export_compartments, export_regions, _tf_import_cmd):
# Read the arguments
global tf_import_cmd
global values_for_column
global sheet_dict
global importCommands
- global config
global values_for_vcninfo
+ global config
+ config = config1
+ global signer
+ signer = signer1
cd3file = inputfile
if '.xls' not in cd3file:
@@ -277,12 +280,6 @@ def export_routetable(inputfile, export_compartments,export_regions, service_dir
# Get dict for columns from Excel_Columns
sheet_dict=ct.sheet_dict["RouteRulesinOCI"]
- config = oci.config.from_file(_config)
-
- if ct == None:
- ct = commonTools()
- ct.get_subscribedregions(_config)
- ct.get_network_compartment_ids(config['tenancy'], "root", _config)
print("\nFetching Route Rules...")
if tf_import_cmd:
@@ -293,11 +290,13 @@ def export_routetable(inputfile, export_compartments,export_regions, service_dir
"tf_import_commands_network_routerules_nonGF.sh")
importCommands[reg] = open(outdir + "/" + reg + "/" + service_dir+ "/tf_import_commands_network_routerules_nonGF.sh", "a")
importCommands[reg].write("#!/bin/bash")
+ importCommands[reg].write("\n")
+ importCommands[reg].write("terraform init")
importCommands[reg].write("\n\n######### Writing import for Route Tables #########\n\n")
for reg in export_regions:
config.__setitem__("region", commonTools().region_dict[reg])
- vcn = VirtualNetworkClient(config)
+ vcn = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
#comp_ocid_done = []
@@ -315,5 +314,6 @@ def export_routetable(inputfile, export_compartments,export_regions, service_dir
if tf_import_cmd:
commonTools.write_to_cd3(values_for_vcninfo, cd3file, "VCN Info")
for reg in export_regions:
+ importCommands[reg].write('\n\nterraform plan\n')
importCommands[reg].close()
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/exportSeclist.py b/cd3_automation_toolkit/Network/BaseNetwork/exportSeclist.py
index ba48bf968..8bfb4a905 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/exportSeclist.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/exportSeclist.py
@@ -207,12 +207,11 @@ def print_secrules(seclists,region,vcn_name,comp_name):
print(printstr)
# Execution of the code begins here
-def export_seclist(inputfile, export_compartments,export_regions,service_dir, _config, _tf_import_cmd, outdir,ct):
+def export_seclist(inputfile, outdir, service_dir, config, signer, ct, export_compartments, export_regions, _tf_import_cmd):
global tf_import_cmd
global values_for_column
global sheet_dict
global importCommands
- global config
cd3file = inputfile
@@ -226,12 +225,6 @@ def export_seclist(inputfile, export_compartments,export_regions,service_dir, _c
# Read CD3
df, values_for_column = commonTools.read_cd3(cd3file,"SecRulesinOCI")
- config = oci.config.from_file(_config)
-
- if ct == None:
- ct = commonTools()
- ct.get_subscribedregions(_config)
- ct.get_network_compartment_ids(config['tenancy'],"root", _config)
print("\nFetching Security Rules...")
@@ -246,12 +239,14 @@ def export_seclist(inputfile, export_compartments,export_regions,service_dir, _c
"tf_import_commands_network_secrules_nonGF.sh")
importCommands[reg] = open(outdir + "/" + reg + "/" + service_dir+ "/tf_import_commands_network_secrules_nonGF.sh", "w")
importCommands[reg].write("#!/bin/bash")
+ importCommands[reg].write("\n")
+ importCommands[reg].write("terraform init")
importCommands[reg].write("\n\n######### Writing import for Security Lists #########\n\n")
for reg in export_regions:
config.__setitem__("region", commonTools().region_dict[reg])
- vcn = VirtualNetworkClient(config)
+ vcn = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
#comp_ocid_done = []
for ntk_compartment_name in export_compartments:
@@ -267,4 +262,5 @@ def export_seclist(inputfile, export_compartments,export_regions,service_dir, _c
print("SecRules exported to CD3\n")
if tf_import_cmd:
for reg in export_regions:
+ importCommands[reg].write('\n\nterraform plan\n')
importCommands[reg].close()
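Several hunks above make the same change to the generated import scripts: a `terraform init` is written right after the shebang, and a `terraform plan` is appended before each file is closed. A sketch of the resulting script shape; `script_header` and `script_footer` are illustrative helpers, not functions from the toolkit:

```python
def script_header(title):
    """Header now written to each generated tf_import_commands_*.sh file."""
    return ("#!/bin/bash\n"
            "terraform init\n"
            "\n######### Writing import for " + title + " #########\n\n")

def script_footer():
    """Footer appended just before the script file is closed."""
    return "\n\nterraform plan\n"

# A generated script: init first, imports in the middle, plan last.
script = script_header("Security Lists") + "terraform import ...\n" + script_footer()
```

Running `terraform init` up front means the script works in a fresh output directory, and the closing `terraform plan` lets the operator confirm the imported state matches the generated configuration.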
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/export_network_nonGreenField.py b/cd3_automation_toolkit/Network/BaseNetwork/export_network_nonGreenField.py
index 309d35969..2660f7e83 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/export_network_nonGreenField.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/export_network_nonGreenField.py
@@ -512,21 +512,15 @@ def get_comp_details(comp_data):
# Close the safe_file post updates
rpc_safe_file["global"].close()
-def export_major_objects(inputfile, outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
+def export_major_objects(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global sheet_dict_vcns
global sheet_dict_drgv2
- input_config_file = _config
- config = oci.config.from_file(file_location=input_config_file)
-
cd3file = inputfile
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
exit()
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
# Read CD3
df, values_for_column_vcns = commonTools.read_cd3(cd3file, "VCNs")
df, values_for_column_drgv2 = commonTools.read_cd3(cd3file, "DRGs")
@@ -555,9 +549,7 @@ def export_major_objects(inputfile, outdir, service_dir, _config, ct, export_com
# Create backups
for reg in export_regions:
- if (
- os.path.exists(
- outdir + "/" + reg + "/" + service_dir + "/tf_import_commands_network_major-objects_nonGF.sh")):
+ if (os.path.exists(outdir + "/" + reg + "/" + service_dir + "/tf_import_commands_network_major-objects_nonGF.sh")):
commonTools.backup_file(outdir + "/" + reg + "/" + service_dir, "tf_import_network",
"tf_import_commands_network_major-objects_nonGF.sh")
if (os.path.exists(outdir + "/" + reg + "/" + service_dir + "/obj_names.safe")):
@@ -576,7 +568,7 @@ def export_major_objects(inputfile, outdir, service_dir, _config, ct, export_com
current_region = reg
importCommands[reg].write("\n######### Writing import for DRGs #########\n")
config.__setitem__("region", ct.region_dict[reg])
- vnc = VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ vnc = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
drg_ocid = []
drg_rt_ocid = []
@@ -683,7 +675,12 @@ def export_major_objects(inputfile, outdir, service_dir, _config, ct, export_com
# RPC
elif attach_type.upper() == "REMOTE_PEERING_CONNECTION" and rpc_execution:
- # Fetch RPC Details
+ # Skip RPCs that are not AVAILABLE or that peer into another tenancy
+ rpc = vnc.get_remote_peering_connection(attach_id).data
+ if rpc.lifecycle_state != 'AVAILABLE' or rpc.is_cross_tenancy_peering:  # boolean, not the string 'false'
+ continue
+
+ # Fetch RPC Details
drg_route_table_id = drg_attachment_info.drg_route_table_id
if (drg_route_table_id is not None):
@@ -717,8 +714,8 @@ def export_major_objects(inputfile, outdir, service_dir, _config, ct, export_com
subs_region_list.remove(current_region)
for new_reg in subs_region_list:
config.__setitem__("region", ct.region_dict[new_reg])
- dest_rpc_dict[new_reg] = oci.core.VirtualNetworkClient(config,
- retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ dest_rpc_dict[new_reg] = oci.core.VirtualNetworkClient(config=config,
+ retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
SOURCE_RPC_LIST = oci.pagination.list_call_get_all_results(
vnc.list_remote_peering_connections,
compartment_id=ct.ntk_compartment_ids[
@@ -780,7 +777,7 @@ def export_major_objects(inputfile, outdir, service_dir, _config, ct, export_com
for reg in export_regions:
importCommands[reg].write("\n######### Writing import for VCNs #########\n")
config.__setitem__("region", ct.region_dict[reg])
- vnc = VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ vnc = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
comp_ocid_done = []
for ntk_compartment_name in export_compartments:
@@ -883,19 +880,13 @@ def export_major_objects(inputfile, outdir, service_dir, _config, ct, export_com
print("VCNs exported to CD3\n")
for reg in export_regions:
+ importCommands[reg].write('\n\nterraform plan\n')
importCommands[reg].close()
oci_obj_names[reg].close()
-def export_dhcp(inputfile, outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
+def export_dhcp(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global sheet_dict_dhcp
- input_config_file = _config
- config = oci.config.from_file(file_location=input_config_file)
-
- if ct == None:
- ct = commonTools()
- ct.get_subscribedregions(input_config_file)
- ct.get_network_compartment_ids(config['tenancy'], "root", input_config_file)
cd3file = inputfile
if ('.xls' not in cd3file):
@@ -919,12 +910,13 @@ def export_dhcp(inputfile, outdir, service_dir, _config, ct, export_compartments
"w")
importCommands[reg].write("#!/bin/bash")
importCommands[reg].write("\n")
+ importCommands[reg].write("terraform init")
print("Tab- DHCP would be overwritten during export process!!!")
for reg in export_regions:
importCommands[reg].write("\n\n######### Writing import for DHCP #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
- vnc = VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ vnc = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
# comp_ocid_done = []
for ntk_compartment_name in export_compartments:
@@ -945,19 +937,14 @@ def export_dhcp(inputfile, outdir, service_dir, _config, ct, export_compartments
dhcp_info)
commonTools.write_to_cd3(values_for_column_dhcp, cd3file, "DHCP")
print("DHCP exported to CD3\n")
+
for reg in export_regions:
+ importCommands[reg].write('\n\nterraform plan\n')
importCommands[reg].close()
-def export_subnets_vlans(inputfile, outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
+def export_subnets_vlans(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global sheet_dict_subnets_vlans
- input_config_file = _config
- config = oci.config.from_file(file_location=input_config_file)
-
- if ct == None:
- ct = commonTools()
- ct.get_subscribedregions(input_config_file)
- ct.get_network_compartment_ids(config['tenancy'], "root", input_config_file)
cd3file = inputfile
if ('.xls' not in cd3file):
@@ -988,6 +975,7 @@ def export_subnets_vlans(inputfile, outdir, service_dir, _config, ct, export_com
outdir + "/" + reg + "/" + service_dir_network + "/tf_import_commands_network_subnets_nonGF.sh", "w")
importCommands[reg].write("#!/bin/bash")
importCommands[reg].write("\n")
+ importCommands[reg].write("terraform init")
if (os.path.exists(outdir + "/" + reg + "/" + service_dir_vlan + "/tf_import_commands_network_vlans_nonGF.sh")):
commonTools.backup_file(outdir + "/" + reg + "/" + service_dir_vlan, "tf_import_network",
@@ -996,14 +984,14 @@ def export_subnets_vlans(inputfile, outdir, service_dir, _config, ct, export_com
outdir + "/" + reg + "/" + service_dir_vlan + "/tf_import_commands_network_vlans_nonGF.sh", "w")
importCommands_vlan[reg].write("#!/bin/bash")
importCommands_vlan[reg].write("\n")
- importCommands[reg].write("terraform init")
+ importCommands_vlan[reg].write("terraform init")
print("Tab- 'SubnetsVLANs' would be overwritten during export process!!!")
for reg in export_regions:
importCommands[reg].write("\n\n######### Writing import for Subnets #########\n\n")
importCommands_vlan[reg].write("\n\n######### Writing import for VLANs #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
- vnc = VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ vnc = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
region = reg.capitalize()
skip_vlans = 0
@@ -1102,9 +1090,7 @@ def export_subnets_vlans(inputfile, outdir, service_dir, _config, ct, export_com
importCommands_vlan[reg].close()
# Execution of the code begins here
-def export_networking(inputfile, outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
- input_config_file = _config
- config = oci.config.from_file(file_location=input_config_file)
+def export_networking(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
print("\nCD3 excel file should not be opened during export process!!!\n")
@@ -1116,31 +1102,20 @@ def export_networking(inputfile, outdir, service_dir, _config, ct, export_compar
service_dir_nsg = ""
# Fetch Major Objects
- export_major_objects(inputfile, export_compartments=export_compartments, export_regions=export_regions,
- service_dir=service_dir_network, _config=input_config_file, outdir=outdir, ct=ct)
+ export_major_objects(inputfile, outdir, service_dir_network, config=config, signer=signer, ct=ct, export_compartments=export_compartments, export_regions=export_regions)
# Fetch DHCP
-
- export_dhcp(inputfile, export_compartments=export_compartments, export_regions=export_regions,
- service_dir=service_dir_network, _config=input_config_file, outdir=outdir, ct=ct)
+ export_dhcp(inputfile, outdir, service_dir_network, config=config, signer=signer, ct=ct, export_compartments=export_compartments, export_regions=export_regions)
# Fetch Subnets and VLANs
- export_subnets_vlans(inputfile, export_compartments=export_compartments, export_regions=export_regions,
- service_dir=service_dir, _config=input_config_file, outdir=outdir, ct=ct)
+ export_subnets_vlans(inputfile, outdir, service_dir, config=config, signer=signer, ct=ct, export_compartments=export_compartments, export_regions=export_regions)
# Fetch RouteRules and SecRules
- export_seclist(inputfile, export_compartments=export_compartments, export_regions=export_regions,
- service_dir=service_dir_network, _config=input_config_file, _tf_import_cmd=True, outdir=outdir,
- ct=ct)
+ export_seclist(inputfile, outdir, service_dir_network, config=config, signer=signer, ct=ct, export_compartments=export_compartments, export_regions=export_regions, _tf_import_cmd=True)
- export_routetable(inputfile, export_compartments=export_compartments, export_regions=export_regions,
- service_dir=service_dir_network, _config=input_config_file, _tf_import_cmd=True, outdir=outdir,
- ct=ct)
+ export_routetable(inputfile, outdir, service_dir_network, config1=config, signer1=signer, ct=ct, export_compartments=export_compartments, export_regions=export_regions, _tf_import_cmd=True)
- export_drg_routetable(inputfile, export_compartments=export_compartments, export_regions=export_regions,
- service_dir=service_dir_network, _config=input_config_file, _tf_import_cmd=True,
- outdir=outdir, ct=ct)
+ export_drg_routetable(inputfile, outdir, service_dir_network, config1=config, signer1=signer, ct=ct, export_compartments=export_compartments, export_regions=export_regions, _tf_import_cmd=True)
# Fetch NSGs
- export_nsg(inputfile, export_compartments=export_compartments, export_regions=export_regions,
- service_dir=service_dir_nsg, _config=input_config_file, _tf_import_cmd=True, outdir=outdir, ct=ct)
+ export_nsg(inputfile, outdir, service_dir_nsg, config=config, signer=signer, ct=ct, export_compartments=export_compartments, export_regions=export_regions, _tf_import_cmd=True)
\ No newline at end of file
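The new RPC guard in `export_major_objects` skips attachments that cannot be imported for this tenancy. Note that the SDK's `RemotePeeringConnection.is_cross_tenancy_peering` is a boolean, so comparing it to the string `'false'` is always true and would skip every RPC. The intended predicate, written as a standalone hypothetical helper:

```python
def should_skip_rpc(lifecycle_state, is_cross_tenancy_peering):
    """True when a remote peering connection should not be exported:
    it is either not AVAILABLE or it peers into another tenancy."""
    return lifecycle_state != "AVAILABLE" or bool(is_cross_tenancy_peering)
```

In the export loop this maps to `if should_skip_rpc(rpc.lifecycle_state, rpc.is_cross_tenancy_peering): continue`.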
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/modify_routerules_tf.py b/cd3_automation_toolkit/Network/BaseNetwork/modify_routerules_tf.py
index ce9b169c7..bb246f5dc 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/modify_routerules_tf.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/modify_routerules_tf.py
@@ -22,12 +22,8 @@
# ######
# Execution of the code begins here for drg route table
-def modify_terraform_drg_routerules(inputfile, outdir, service_dir,prefix=None, non_gf_tenancy=False, config=DEFAULT_LOCATION):
+def modify_terraform_drg_routerules(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy):
filename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
#Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
@@ -246,12 +242,8 @@ def modify_terraform_drg_routerules(inputfile, outdir, service_dir,prefix=None,
oname_rt.write(tempSkeletonDRGRouteRule[reg])
oname_rt.close()
# Execution of the code begins here for route rule modification
-def modify_terraform_routerules(inputfile, outdir, service_dir,prefix=None, non_gf_tenancy=False, config=DEFAULT_LOCATION):
+def modify_terraform_routerules(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy):
filename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
#Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
diff --git a/cd3_automation_toolkit/Network/BaseNetwork/modify_secrules_tf.py b/cd3_automation_toolkit/Network/BaseNetwork/modify_secrules_tf.py
index cc457b7ad..3df47d311 100644
--- a/cd3_automation_toolkit/Network/BaseNetwork/modify_secrules_tf.py
+++ b/cd3_automation_toolkit/Network/BaseNetwork/modify_secrules_tf.py
@@ -18,7 +18,7 @@
sys.path.append(os.getcwd() + "/../../..")
from commonTools import *
# Execution of the code begins here
-def modify_terraform_secrules(inputfile, outdir, service_dir,prefix=None, non_gf_tenancy=False, config=DEFAULT_LOCATION):
+def modify_terraform_secrules(inputfile, outdir, service_dir, prefix, ct, non_gf_tenancy):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
@@ -28,10 +28,6 @@ def modify_terraform_secrules(inputfile, outdir, service_dir,prefix=None, non_gf
seclist = env.get_template('seclist-template')
secrulesfilename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
seclists_done = {}
default_ruleStr = {}
diff --git a/cd3_automation_toolkit/Network/DNS/create_dns_resolvers.py b/cd3_automation_toolkit/Network/DNS/create_dns_resolvers.py
index 643c83a9c..3f7e70fe0 100644
--- a/cd3_automation_toolkit/Network/DNS/create_dns_resolvers.py
+++ b/cd3_automation_toolkit/Network/DNS/create_dns_resolvers.py
@@ -16,14 +16,11 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_dns_resolvers(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_dns_resolvers(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "DNS-Resolvers"
auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
no_strip_columns = ["Display Name"]
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
outfile = {}
oname = {}
@@ -221,7 +218,7 @@ def create_terraform_dns_resolvers(inputfile, outdir, service_dir, prefix, confi
print(
"\nRegion, Compartment Name, VCN Name fields are mandatory. Please enter a value and try again !!")
print("\n** Exiting **")
- exit()
+ exit(1)
# set key for template items
vcn_name = str(df["VCN Name"][i])
diff --git a/cd3_automation_toolkit/Network/DNS/create_dns_rrsets.py b/cd3_automation_toolkit/Network/DNS/create_dns_rrsets.py
index 88a6ec31f..22e5aa554 100644
--- a/cd3_automation_toolkit/Network/DNS/create_dns_rrsets.py
+++ b/cd3_automation_toolkit/Network/DNS/create_dns_rrsets.py
@@ -17,15 +17,11 @@
######
# Execution of the code begins here
-def create_terraform_dns_rrsets(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_dns_rrsets(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "DNS-Views-Zones-Records"
auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
outfile = {}
oname = {}
tfStr = {}
@@ -76,7 +72,7 @@ def create_terraform_dns_rrsets(inputfile, outdir, service_dir, prefix, config=D
print(
"\nRegion, Compartment Name, View Name fields are mandatory. Please enter a value and try again !!")
print("\n** Exiting **")
- exit()
+ exit(1)
# set key for template items
view_name = str(df["View Name"][i]).strip()
diff --git a/cd3_automation_toolkit/Network/DNS/create_dns_views.py b/cd3_automation_toolkit/Network/DNS/create_dns_views.py
index 3c4994c2b..4532df824 100644
--- a/cd3_automation_toolkit/Network/DNS/create_dns_views.py
+++ b/cd3_automation_toolkit/Network/DNS/create_dns_views.py
@@ -15,15 +15,11 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_dns_views(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_dns_views(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "DNS-Views-Zones-Records"
auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
outfile = {}
oname = {}
tfStr = {}
@@ -79,7 +75,7 @@ def create_terraform_dns_views(inputfile, outdir, service_dir, prefix, config=DE
print(
"\nRegion, Compartment Name, View Name fields are mandatory. Please enter a value and try again !!")
print("\n** Exiting **")
- exit()
+ exit(1)
# set key for template items
display_tf_name = str(df["View Name"][i]).strip()
diff --git a/cd3_automation_toolkit/Network/DNS/create_dns_zones.py b/cd3_automation_toolkit/Network/DNS/create_dns_zones.py
index e089f5cb4..54af6b0b7 100644
--- a/cd3_automation_toolkit/Network/DNS/create_dns_zones.py
+++ b/cd3_automation_toolkit/Network/DNS/create_dns_zones.py
@@ -15,15 +15,11 @@
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_dns_zones(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_dns_zones(inputfile, outdir, service_dir, prefix, ct):
filename = inputfile
- configFileName = config
sheetName = "DNS-Views-Zones-Records"
auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
outfile = {}
oname = {}
tfStr = {}
@@ -77,7 +73,7 @@ def create_terraform_dns_zones(inputfile, outdir, service_dir, prefix, config=DE
print(
"\nRegion, Compartment Name, View Name fields are mandatory. Please enter a value and try again !!")
print("\n** Exiting **")
- exit()
+ exit(1)
# set key for template items
view_name = str(df["View Name"][i]).strip()
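The DNS generators all perform the same mandatory-column check and, with this change, exit with a non-zero status so callers can detect failure. The check works because pandas reads empty Excel cells as NaN, which `str()` renders as the literal `'nan'`. A sketch of that validation as a hypothetical helper operating on a row dict:

```python
def mandatory_fields_present(row, fields=("Region", "Compartment Name", "View Name")):
    """False when any mandatory CD3 column is empty; pandas turns empty
    Excel cells into NaN, and str(NaN) is the literal string 'nan'."""
    return all(str(row.get(f, float("nan"))).lower() != "nan" for f in fields)
```

With this shape, the generator can do `if not mandatory_fields_present(row): exit(1)` instead of repeating the three `str(...).lower() == 'nan'` comparisons inline.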
diff --git a/cd3_automation_toolkit/Network/DNS/export_dns_resolvers.py b/cd3_automation_toolkit/Network/DNS/export_dns_resolvers.py
index 2d21bcb5d..d5d987f3d 100644
--- a/cd3_automation_toolkit/Network/DNS/export_dns_resolvers.py
+++ b/cd3_automation_toolkit/Network/DNS/export_dns_resolvers.py
@@ -134,15 +134,13 @@ def print_resolvers(resolver_tf_name, resolver, values_for_column, **value):
values_for_column = commonTools.export_tags(resolver, col_header, values_for_column)
# Execution of the code begins here
-def export_dns_resolvers(inputfile, _outdir, service_dir, _config, ct, export_compartments=[], export_regions=[]):
+def export_dns_resolvers(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global values_for_vcninfo
global cd3file
global reg
- global outdir
global values_for_column
global serv_dir
@@ -152,15 +150,7 @@ def export_dns_resolvers(inputfile, _outdir, service_dir, _config, ct, export_co
print("\nAcceptable cd3 format: .xlsx")
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = "DNS-Resolvers"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
# Read CD3
df, values_for_column = commonTools.read_cd3(cd3file, sheetName)
@@ -189,8 +179,8 @@ def export_dns_resolvers(inputfile, _outdir, service_dir, _config, ct, export_co
importCommands[reg].write("\n\n######### Writing import for DNS Resolvers #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- dns_client = oci.dns.DnsClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vnc_client = oci.core.VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ dns_client = oci.dns.DnsClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
+ vnc_client = oci.core.VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
for ntk_compartment_name in export_compartments:
vcns = oci.pagination.list_call_get_all_results(vnc_client.list_vcns, compartment_id=ct.ntk_compartment_ids[ntk_compartment_name], lifecycle_state="AVAILABLE")
diff --git a/cd3_automation_toolkit/Network/DNS/export_dns_views_zones_records.py b/cd3_automation_toolkit/Network/DNS/export_dns_views_zones_records.py
index 9cb0cafff..bccf7a901 100644
--- a/cd3_automation_toolkit/Network/DNS/export_dns_views_zones_records.py
+++ b/cd3_automation_toolkit/Network/DNS/export_dns_views_zones_records.py
@@ -97,15 +97,13 @@ def print_empty_view(region, ntk_compartment_name, view_data, values_for_column)
values_for_column = commonTools.export_tags(view_data, col_header, values_for_column)
# Execution of the code begins here
-def export_dns_views_zones_rrsets(inputfile, _outdir, service_dir, _config, ct, dns_filter, export_compartments=[], export_regions=[]):
+def export_dns_views_zones_rrsets(inputfile, outdir, service_dir, config, signer, ct, dns_filter, export_compartments=[], export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global values_for_vcninfo
global cd3file
global reg
- global outdir
global values_for_column
cd3file = inputfile
@@ -113,19 +111,11 @@ def export_dns_views_zones_rrsets(inputfile, _outdir, service_dir, _config, ct,
print("\nAcceptable cd3 format: .xlsx")
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
view_default = dns_filter
zone_default = dns_filter
record_default = dns_filter
sheetName = "DNS-Views-Zones-Records"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
# Read CD3
df, values_for_column= commonTools.read_cd3(cd3file,sheetName)
@@ -155,7 +145,7 @@ def export_dns_views_zones_rrsets(inputfile, _outdir, service_dir, _config, ct,
importCommands[reg].write("\n\n######### Writing import for DNS Views/Zones/RRsets #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- dns_client = oci.dns.DnsClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ dns_client = oci.dns.DnsClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
# Same compartment will be used to export view/zones
for ntk_compartment_name in export_compartments:
views = oci.pagination.list_call_get_all_results(dns_client.list_views, compartment_id=ct.ntk_compartment_ids[ntk_compartment_name], lifecycle_state="ACTIVE")
diff --git a/cd3_automation_toolkit/Network/Global/create_rpc_resources.py b/cd3_automation_toolkit/Network/Global/create_rpc_resources.py
index 41a461a7f..91bc86c6a 100644
--- a/cd3_automation_toolkit/Network/Global/create_rpc_resources.py
+++ b/cd3_automation_toolkit/Network/Global/create_rpc_resources.py
@@ -21,14 +21,12 @@
# Setting current working dir.
owd = os.getcwd()
-def find_subscribed_regions(inputfile, outdir, service_dir, prefix, config):
+def find_subscribed_regions(inputfile, outdir, service_dir, prefix, config,signer,auth_mechanism):
subs_region_list = []
new_subs_region_list = []
subs_region_pairs = []
- ct = commonTools()
- config = oci.config.from_file(file_location=config)
- idc = IdentityClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ idc = IdentityClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
regionsubscriptions = idc.list_region_subscriptions(tenancy_id=config['tenancy'])
for reg in regionsubscriptions.data:
@@ -63,6 +61,17 @@ def find_subscribed_regions(inputfile, outdir, service_dir, prefix, config):
with open("rpc.tf", "w") as fh:
fh.write(output)
+ with open("rpc.tf", "r+") as provider_file:
+ provider_file_data = provider_file.read().rstrip()
+ if auth_mechanism == 'instance_principal':
+ provider_file_data = provider_file_data.replace("provider \"oci\" {", "provider \"oci\" {\nauth = \"InstancePrincipal\"")
+ if auth_mechanism == 'session_token':
+ provider_file_data = provider_file_data.replace("provider \"oci\" {", "provider \"oci\" {\nauth = \"SecurityToken\"\nconfig_file_profile = \"DEFAULT\"")
+
+ f = open("rpc.tf", "w+")
+ f.write(provider_file_data)
+ f.close()
+
# For generating provider config
file_loader_rpc = FileSystemLoader(f'{Path(__file__).parent}/templates/rpc-module')
env_rpc = Environment(loader=file_loader_rpc, keep_trailing_newline=True, trim_blocks=True, lstrip_blocks=True)
@@ -90,22 +99,21 @@ def find_subscribed_regions(inputfile, outdir, service_dir, prefix, config):
# Execution of the code begins here
-def create_rpc_resource(inputfile, outdir, service_dir, prefix, non_gf_tenancy, config, modify_network=False):
+def create_rpc_resource(inputfile, outdir, service_dir, prefix, auth_mechanism, config_file,ct, non_gf_tenancy):
# Call pre-req func
rpc_safe_file = {}
- find_subscribed_regions(inputfile, outdir, service_dir, prefix, config)
+ config, signer = ct.authenticate(auth_mechanism, config_file)
+ find_subscribed_regions(inputfile, outdir, service_dir, prefix, config,signer,auth_mechanism)
+
os.chdir(owd)
tfStr = {}
requester_drg_name = ''
accepter_drg_rt_name = ''
filename = inputfile
- configFileName = config
sheetName = "DRGs"
auto_tfvars_filename = prefix + '_' + "rpcs" + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
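The `rpc.tf` rewrite above works by plain string replacement on the rendered provider block. A self-contained sketch of that replacement logic (extracted into a hypothetical `inject_provider_auth` helper for clarity; the toolkit does this inline):

```python
def inject_provider_auth(provider_file_data, auth_mechanism):
    # Mirror of the string replacement in the hunk above: prepend the auth
    # settings inside every `provider "oci" {` block when the toolkit is not
    # using the default API-key mechanism.
    if auth_mechanism == 'instance_principal':
        return provider_file_data.replace(
            'provider "oci" {',
            'provider "oci" {\nauth = "InstancePrincipal"')
    if auth_mechanism == 'session_token':
        return provider_file_data.replace(
            'provider "oci" {',
            'provider "oci" {\nauth = "SecurityToken"\nconfig_file_profile = "DEFAULT"')
    return provider_file_data  # api_key: provider block needs no auth argument

tf = 'provider "oci" {\n  region = "us-ashburn-1"\n}\n'
print(inject_provider_auth(tf, 'instance_principal'))
```

Because `str.replace` rewrites every occurrence, the multi-region aliased provider blocks that the RPC template generates all pick up the auth setting in one pass.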
diff --git a/cd3_automation_toolkit/Network/Global/templates/rpc-module/rpc-provider-terraform-template b/cd3_automation_toolkit/Network/Global/templates/rpc-module/rpc-provider-terraform-template
index f78bd52e6..fe68c6326 100644
--- a/cd3_automation_toolkit/Network/Global/templates/rpc-module/rpc-provider-terraform-template
+++ b/cd3_automation_toolkit/Network/Global/templates/rpc-module/rpc-provider-terraform-template
@@ -8,7 +8,7 @@
terraform {
required_providers {
oci = {
- source = "hashicorp/oci"
+ source = "oracle/oci"
configuration_aliases = [
{% for region in subscribed_regions %}
{% set region_keys = region.split('-') %}
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/create_backendset_backendservers.py b/cd3_automation_toolkit/Network/LoadBalancers/create_backendset_backendservers.py
index bbaa162b3..49b2c3803 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/create_backendset_backendservers.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/create_backendset_backendservers.py
@@ -18,19 +18,16 @@
# Required Inputs-CD3 excel file, Config file AND outdir
######
# Execution of the code begins here
-def create_backendset_backendservers(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_backendset_backendservers(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
beset = env.get_template('backend-set-template')
beserver = env.get_template('backends-template')
filename = inputfile
- configFileName = config
sheetName = "LB-BackendSet-BackendServer"
lb_auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
beset_str = {}
beserver_str = {}
@@ -74,7 +71,7 @@ def create_backendset_backendservers(inputfile, outdir, service_dir, prefix, con
if region not in ct.all_regions:
print("\nInvalid Region; It should be one of the values mentioned in VCN Info tab...Exiting!!")
- exit()
+ exit(1)
# temporary dictionaries
tempStr= {}
@@ -154,7 +151,7 @@ def create_backendset_backendservers(inputfile, outdir, service_dir, prefix, con
if str(columnvalue).lower() == 'true':
if str(df.loc[i,'Verify Depth']) == '' or str(df.loc[i,'Verify Depth']) == 'nan':
print("\nVerify Depth cannot be left empty when Verify Peer Certificate has a value... Exiting!!!")
- exit()
+ exit(1)
if columnname == 'SSL Protocols':
tls_versions_list = ''
@@ -169,7 +166,7 @@ def create_backendset_backendservers(inputfile, outdir, service_dir, prefix, con
elif columnvalue == '' and str(df.loc[i, 'Cipher Suite Name']) != 'nan':
print("\nSSL Protocols are mandatory when custom CipherSuiteName is provided..... Exiting !!")
- exit()
+ exit(1)
elif columnvalue != '' and str(df.loc[i, 'Cipher Suite Name']) == 'nan':
print("\nNOTE: Cipher Suite Name is not specified for Backend Set -> " + str(df.loc[i, 'Backend Set Name']) + ", default value - 'oci-default-ssl-cipher-suite-v1' will be considered.\n")
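The repeated `exit()` → `exit(1)` changes in these hunks matter for the new CI/CD support noted in the release notes: a bare `exit()` returns status 0, which a pipeline wrapper would read as success. A minimal sketch of the validation pattern, assuming a `sys.exit(1)`-style failure (function name is illustrative):

```python
import sys

def require_valid_region(region, all_regions):
    # Mirrors the region check in the hunks above; exiting with a non-zero
    # code lets a CI/CD wrapper detect the validation failure, where a bare
    # exit() would report status 0 (success).
    if region not in all_regions:
        print("\nInvalid Region; It should be one of the values "
              "mentioned in VCN Info tab...Exiting!!")
        sys.exit(1)

require_valid_region("ashburn", ["ashburn", "phoenix"])  # passes silently
```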
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/create_listener.py b/cd3_automation_toolkit/Network/LoadBalancers/create_listener.py
index de8c304a2..e1ad0ff91 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/create_listener.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/create_listener.py
@@ -18,20 +18,16 @@
# Required Inputs-CD3 excel file, Config file AND outdir
######
# Execution of the code begins here
-def create_listener(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_listener(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
listener = env.get_template('listener-template')
filename = inputfile
outdir = outdir
- configFileName = config
sheetName = "LB-Listener"
lb_auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
# Read cd3 using pandas dataframe
df, col_headers = commonTools.read_cd3(filename, sheetName)
@@ -72,7 +68,7 @@ def create_listener(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCAT
if region not in ct.all_regions:
print("\nInvalid Region; It should be one of the values mentioned in VCN Info tab...Exiting!!")
- exit()
+ exit(1)
# temporary dictionaries
tempStr= {}
@@ -182,7 +178,7 @@ def create_listener(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCAT
if str(columnvalue).lower() == 'true':
if str(df.loc[i,'Verify Depth']) == '' or str(df.loc[i,'Verify Depth']) == 'nan':
print("\nVerify Depth cannot be left empty when Verify Peer Certificate has a value... Exiting!!!")
- exit()
+ exit(1)
if columnname == 'SSL Protocols':
tls_versions_list = ''
@@ -197,7 +193,7 @@ def create_listener(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCAT
elif columnvalue == '' and str(df.loc[i,'Cipher Suite Name']) != 'nan':
print("\nSSL Protocols are mandatory when custom CipherSuiteName is provided..... Exiting !!")
- exit()
+ exit(1)
elif columnvalue != '' and str(df.loc[i,'Cipher Suite Name']) == 'nan':
print("NOTE: Cipher Suite Name is not specified for Listener -> "+str(df.loc[i,'Listener Name'])+", default value - 'oci-default-ssl-cipher-suite-v1' will be considered.")
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/create_nlb_backendset_backendservers.py b/cd3_automation_toolkit/Network/LoadBalancers/create_nlb_backendset_backendservers.py
index b9ef7b471..8dc601cf5 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/create_nlb_backendset_backendservers.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/create_nlb_backendset_backendservers.py
@@ -16,19 +16,16 @@
# Required Inputs-CD3 excel file, Config file AND outdir
######
# Execution of the code begins here
-def create_nlb_backendset_backendservers(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_nlb_backendset_backendservers(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
beset = env.get_template('nlb-backend-set-template')
beserver = env.get_template('nlb-backends-template')
filename = inputfile
- configFileName = config
sheetName = "NLB-BackendSets-BackendServers"
lb_auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
beset_str = {}
beserver_str = {}
nlb_tf_name = ''
@@ -65,7 +62,7 @@ def create_nlb_backendset_backendservers(inputfile, outdir, service_dir, prefix,
if region != 'nan' and region not in ct.all_regions:
print("\nInvalid Region; It should be one of the values mentioned in VCN Info tab...Exiting!!")
- exit()
+ exit(1)
# temporary dictionaries
tempStr= {}
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/create_path_route_set.py b/cd3_automation_toolkit/Network/LoadBalancers/create_path_route_set.py
index 276823670..c4c07733b 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/create_path_route_set.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/create_path_route_set.py
@@ -18,21 +18,16 @@
# Required Inputs-CD3 excel file, Config file AND outdir
######
# Execution of the code begins here
-def create_path_route_set(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_path_route_set(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
prs = env.get_template('path-route-set-template')
pathrouterules = env.get_template('path-route-rules-template')
filename = inputfile
- configFileName = config
sheetName = "LB-PathRouteSet"
lb_auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
# Read cd3 using pandas dataframe
df, col_headers = commonTools.read_cd3(filename, sheetName)
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/create_ruleset.py b/cd3_automation_toolkit/Network/LoadBalancers/create_ruleset.py
index 1f9d526d7..98069a765 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/create_ruleset.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/create_ruleset.py
@@ -20,7 +20,7 @@
######
# Execution of the code begins here
-def create_ruleset(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_ruleset(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
@@ -32,14 +32,10 @@ def create_ruleset(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATI
uri = env.get_template('uri-redirect-rules-template')
filename = inputfile
- configFileName = config
sheetName = "LB-RuleSet"
lb_auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
rs_str = {}
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
# Read cd3 using pandas dataframe
df, col_headers = commonTools.read_cd3(filename, sheetName)
@@ -127,7 +123,7 @@ def add_rules(df,rs_str,tempStr,control_access):
if region not in ct.all_regions:
print("\nInvalid Region; It should be one of the values mentioned in VCN Info tab...Exiting!!")
- exit()
+ exit(1)
# temporary dictionaries
tempStr= {}
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_lbr_hostname_certs.py b/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_lbr_hostname_certs.py
index 8593494fa..13a633f5d 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_lbr_hostname_certs.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_lbr_hostname_certs.py
@@ -21,7 +21,7 @@
# Required Inputs-CD3 excel file, Config file AND outdir
######
# Execution of the code begins here
-def create_terraform_lbr_hostname_certs(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_lbr_hostname_certs(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
@@ -34,10 +34,6 @@ def create_terraform_lbr_hostname_certs(inputfile, outdir, service_dir, prefix,
lb_auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
filename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
lbr_str = {}
reserved_ips_str = {}
@@ -147,11 +143,11 @@ def certificate_templates(dfcert):
if columnvalue.strip().lower() in oracle_cipher_suites:
print("User-defined cipher suite must not be the same as any of Oracle's predefined or reserved SSL cipher suite names... Exiting!!")
- exit()
+ exit(1)
if str(df.loc[i,'Ciphers']).strip() == '':
print("Ciphers Column cannot be left blank when Cipher Suite Name has a value.....Exiting!!")
- exit()
+ exit(1)
else:
columnvalue = ""
@@ -200,7 +196,7 @@ def certificate_templates(dfcert):
if region != 'nan' and region not in ct.all_regions:
print("\nInvalid Region; It should be one of the regions tenancy is subscribed to...Exiting!!")
- exit()
+ exit(1)
# temporary dictionaries
tempStr= {}
@@ -271,7 +267,7 @@ def certificate_templates(dfcert):
lbr_subnets_list.append(subnets.vcn_subnet_map[key][2])
except Exception as e:
print("Invalid Subnet Name specified for row " + str(i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_tf_name': commonTools.check_tf_variable(network_compartment_id), 'vcn_name': vcn_name,'lbr_subnets': json.dumps(lbr_subnets_list)}
elif len(lbr_subnets) == 2:
for subnet in lbr_subnets:
@@ -285,7 +281,7 @@ def certificate_templates(dfcert):
lbr_subnets_list.append(subnets.vcn_subnet_map[key][2])
except Exception as e:
print("Invalid Subnet Name specified for row " + str(i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_tf_name': commonTools.check_tf_variable(network_compartment_id), 'vcn_name': vcn_name,'lbr_subnets': json.dumps(lbr_subnets_list) }
if columnname == "NSGs":
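The subnet lookups in these hunks share one shape: a key into `subnets.vcn_subnet_map`, with a miss treated as a user-input error reported against the spreadsheet row (CD3 data rows start at Excel row 3, hence the `i + 3`). A hedged sketch of that lookup, factored into a hypothetical helper:

```python
def resolve_subnet_id(vcn_subnet_map, key, row_index):
    # vcn_subnet_map values are tuples built from the Subnets tab; index 2
    # holds the subnet identifier, matching vcn_subnet_map[key][2] above.
    try:
        return vcn_subnet_map[key][2]
    except KeyError:
        # Report the Excel row (data starts at row 3) and fail the run.
        raise SystemExit("Invalid Subnet Name specified for row "
                         + str(row_index + 3)
                         + ". It doesn't exist in Subnets sheet. Exiting!!!")

subnet_map = {"vcn1_sub1": ("net-comp", "vcn1", "ocid1.subnet.oc1..aaaa")}
assert resolve_subnet_id(subnet_map, "vcn1_sub1", 0) == "ocid1.subnet.oc1..aaaa"
```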
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_nlb_listener.py b/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_nlb_listener.py
index a3dfdb7aa..d03faf8c6 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_nlb_listener.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/create_terraform_nlb_listener.py
@@ -17,7 +17,7 @@
######
# Execution of the code begins here
-def create_terraform_nlb_listener(inputfile, outdir, service_dir, prefix, config=DEFAULT_LOCATION):
+def create_terraform_nlb_listener(inputfile, outdir, service_dir, prefix, ct):
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader=file_loader, keep_trailing_newline=True)
@@ -29,10 +29,6 @@ def create_terraform_nlb_listener(inputfile, outdir, service_dir, prefix, config
nlb_auto_tfvars_filename = prefix + "_"+sheetName.lower()+".auto.tfvars"
filename = inputfile
- configFileName = config
-
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
nlb_str = {}
reserved_ips_str = {}
@@ -70,7 +66,7 @@ def create_terraform_nlb_listener(inputfile, outdir, service_dir, prefix, config
if region != 'nan' and region not in ct.all_regions:
print("\nInvalid Region; It should be one of the regions tenancy is subscribed to...Exiting!!")
- exit()
+ exit(1)
# Check for empty values
empty_nlb = 0
@@ -152,7 +148,7 @@ def create_terraform_nlb_listener(inputfile, outdir, service_dir, prefix, config
subnet_id = subnets.vcn_subnet_map[key][2]
except Exception as e:
print("Invalid Subnet Name specified for row " + str(i + 3) + ". It Doesnt exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_tf_name': commonTools.check_tf_variable(network_compartment_id), 'vcn_name': vcn_name,'subnet_id': subnet_id}
if columnname == "NSGs":
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/export_lbr_nonGreenField.py b/cd3_automation_toolkit/Network/LoadBalancers/export_lbr_nonGreenField.py
index 5fcfafc84..77bfd01d5 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/export_lbr_nonGreenField.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/export_lbr_nonGreenField.py
@@ -373,7 +373,7 @@ def print_lbr_hostname_certs(region, ct, values_for_column_lhc, lbr, LBRs, lbr_c
return values_for_column_lhc
def print_backendset_backendserver(region, ct, values_for_column_bss, lbr, LBRs, lbr_compartment_name):
- certs = CertificatesClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ certs = CertificatesClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
for eachlbr in LBRs.data:
@@ -797,15 +797,13 @@ def print_prs(region, ct, values_for_column_prs, LBRs, lbr_compartment_name):
return values_for_column_prs
# Execution of the code begins here
-def export_lbr(inputfile, _outdir, service_dir, export_compartments, export_regions,_config,ct):
+def export_lbr(inputfile, outdir, service_dir, config1,signer1, ct, export_compartments, export_regions):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global values_for_vcninfo
global cd3file
global reg
- global outdir
global values_for_column_lhc
global values_for_column_bss
global values_for_column_lis
@@ -818,20 +816,15 @@ def export_lbr(inputfile, _outdir, service_dir, export_compartments, export_regi
global sheet_dict_rule
global sheet_dict_prs
global listener_to_cd3
+ global config,signer
+ signer=signer1
+ config=config1
cd3file = inputfile
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
# Read CD3
df, values_for_column_lhc= commonTools.read_cd3(cd3file,"LB-Hostname-Certs")
@@ -867,14 +860,14 @@ def export_lbr(inputfile, _outdir, service_dir, export_compartments, export_regi
for reg in export_regions:
importCommands[reg].write("\n\n######### Writing import for Load Balancer Objects #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
- lbr = LoadBalancerClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vcn = VirtualNetworkClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ lbr = LoadBalancerClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ network = oci.core.VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+
region = reg.capitalize()
for compartment_name in export_compartments:
LBRs = oci.pagination.list_call_get_all_results(lbr.list_load_balancers,compartment_id=ct.ntk_compartment_ids[compartment_name],
lifecycle_state="ACTIVE")
- network = oci.core.VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
values_for_column_lhc = print_lbr_hostname_certs(region, ct, values_for_column_lhc, lbr, LBRs, compartment_name, network,service_dir)
values_for_column_lis = print_listener(region, ct, values_for_column_lis,LBRs,compartment_name)
values_for_column_bss = print_backendset_backendserver(region, ct, values_for_column_bss, lbr,LBRs,compartment_name)
diff --git a/cd3_automation_toolkit/Network/LoadBalancers/export_nlb_nonGreenField.py b/cd3_automation_toolkit/Network/LoadBalancers/export_nlb_nonGreenField.py
index cae93e361..5c451c2ba 100644
--- a/cd3_automation_toolkit/Network/LoadBalancers/export_nlb_nonGreenField.py
+++ b/cd3_automation_toolkit/Network/LoadBalancers/export_nlb_nonGreenField.py
@@ -134,7 +134,7 @@ def print_nlb_backendset_backendserver(region, ct, values_for_column_bss,NLBs, n
return values_for_column_bss
-def print_nlb_listener(region, ct, values_for_column_lis, NLBs, nlb_compartment_name,vcn):
+def print_nlb_listener(region, outdir, values_for_column_lis, NLBs, nlb_compartment_name,vcn):
for eachnlb in NLBs.data:
# Filter out the NLBs provisioned by oke
@@ -255,15 +255,13 @@ def print_nlb_listener(region, ct, values_for_column_lis, NLBs, nlb_compartment_
return values_for_column_lis
# Execution of the code begins here
-def export_nlb(inputfile, _outdir, service_dir, export_compartments, export_regions, _config,ct):
+def export_nlb(inputfile, outdir, service_dir, config,signer, ct, export_compartments, export_regions):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global values_for_vcninfo
global cd3file
global reg
- global outdir
global values_for_column_bss
global values_for_column_lis
global sheet_dict_bss
@@ -275,14 +273,6 @@ def export_nlb(inputfile, _outdir, service_dir, export_compartments, export_regi
print("\nAcceptable cd3 format: .xlsx")
exit()
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
# Read CD3
df, values_for_column_bss = commonTools.read_cd3(cd3file, "NLB-BackendSets-BackendServers")
df, values_for_column_lis = commonTools.read_cd3(cd3file, "NLB-Listeners")
@@ -311,9 +301,9 @@ def export_nlb(inputfile, _outdir, service_dir, export_compartments, export_regi
for reg in export_regions:
importCommands[reg].write("\n\n######### Writing import for Network Load Balancer Objects #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
- nlb = NetworkLoadBalancerClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vcn = VirtualNetworkClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- cmpt = ComputeClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ nlb = NetworkLoadBalancerClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ vcn = VirtualNetworkClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ cmpt = ComputeClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
region = reg.capitalize()
@@ -322,7 +312,7 @@ def export_nlb(inputfile, _outdir, service_dir, export_compartments, export_regi
NLBs = oci.pagination.list_call_get_all_results(nlb.list_network_load_balancers,compartment_id=ct.ntk_compartment_ids[compartment_name],
lifecycle_state="ACTIVE")
- values_for_column_lis = print_nlb_listener(region, ct, values_for_column_lis,NLBs,compartment_name,vcn)
+ values_for_column_lis = print_nlb_listener(region, outdir, values_for_column_lis,NLBs,compartment_name,vcn)
values_for_column_bss = print_nlb_backendset_backendserver(region, ct, values_for_column_bss,NLBs,compartment_name,cmpt,vcn,nlb)
commonTools.write_to_cd3(values_for_column_lis, cd3file, "NLB-Listeners")
diff --git a/cd3_automation_toolkit/OCI_Protocols b/cd3_automation_toolkit/OCI_Protocols
index 88e3da41e..972096180 100644
--- a/cd3_automation_toolkit/OCI_Protocols
+++ b/cd3_automation_toolkit/OCI_Protocols
@@ -83,7 +83,6 @@
81:VMTP
82:SECURE-VMTP
83:VINES
-84:TTP
84:IPTM
85:NSFNET-IGP
86:DGP
@@ -145,7 +144,8 @@
142:ROHC
143:Ethernet
144:AGGFRAG
-145-252:Unassigned
+145:NSH
+146-252:Unassigned
253:Use for experimentation and testing
254:Use for experimentation and testing
255:Reserved
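The `OCI_Protocols` change removes a duplicate key (protocol 84 had both its former name `TTP` and its current IANA name `IPTM`) and splits newly assigned `145:NSH` out of the unassigned range. A hypothetical parser for this `number:name` format shows why the duplicate mattered; range lines expand to one entry per number:

```python
def parse_protocols(text):
    # Illustrative parser (not toolkit code) for the "number:name" format above.
    # Ranges like "146-252:Unassigned" expand to one entry per protocol number;
    # a duplicate number (the old 84:TTP / 84:IPTM pair) raises immediately.
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, name = line.split(':', 1)
        if '-' in key:
            lo, hi = key.split('-')
            nums = range(int(lo), int(hi) + 1)
        else:
            nums = [int(key)]
        for n in nums:
            if n in table:
                raise ValueError(f"duplicate protocol number {n}")
            table[n] = name
    return table

table = parse_protocols("84:IPTM\n145:NSH\n146-252:Unassigned")
assert table[145] == "NSH" and table[200] == "Unassigned"
```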
diff --git a/cd3_automation_toolkit/OCI_Regions b/cd3_automation_toolkit/OCI_Regions
index 2c49c21d3..4e245dce9 100644
--- a/cd3_automation_toolkit/OCI_Regions
+++ b/cd3_automation_toolkit/OCI_Regions
@@ -1,7 +1,9 @@
#Region:Region_Key
+saltlake:us-saltlake-2
amsterdam:eu-amsterdam-1
stockholm:eu-stockholm-1
abudhabi:me-abudhabi-1
+bogota:sa-bogota-1
mumbai:ap-mumbai-1
paris:eu-paris-1
cardiff:uk-cardiff-1
@@ -29,6 +31,7 @@ santiago:sa-santiago-1
singapore:ap-singapore-1
sanjose:us-sanjose-1
sydney:ap-sydney-1
+valparaiso:sa-valparaiso-1
vinhedo:sa-vinhedo-1
chuncheon:ap-chuncheon-1
montreal:ca-montreal-1
diff --git a/cd3_automation_toolkit/Release-Notes b/cd3_automation_toolkit/Release-Notes
index 7fd3ccbfc..e74e09f0c 100644
--- a/cd3_automation_toolkit/Release-Notes
+++ b/cd3_automation_toolkit/Release-Notes
@@ -1,3 +1,15 @@
+-------------------------------------
+CD3 Automation Toolkit Tag v2024.1.0
+Jan 31st, 2024
+-------------------------------------
+1. Support for multiple Authentication Mechanisms for OCI SDK - API Key, Session Token, Instance Principal
+2. Support for toolkit via CI/CD pipelines for setUpOCI as well as terraform actions
+3. Support for Remote State Management for terraform state using Object Storage bucket
+4. Migrated oci terraform provider from hashicorp/oci to oracle/oci and updated to latest version
+5. Replaced parameter in setUpOCI.properties - 'non_gf_tenancy' with 'workflow_type'. Valid values - 'create_resources' or 'export_resources'
+6. Moved toolkit configuration files into /cd3user/tenancies//.config_files folder
+7. New versioning for the toolkit: ..
+
----------------------------------
CD3 Automation Toolkit Tag v12.1
----------------------------------
diff --git a/cd3_automation_toolkit/SDDC/create_terraform_sddc.py b/cd3_automation_toolkit/SDDC/create_terraform_sddc.py
index 17c31991a..815f204a1 100644
--- a/cd3_automation_toolkit/SDDC/create_terraform_sddc.py
+++ b/cd3_automation_toolkit/SDDC/create_terraform_sddc.py
@@ -11,13 +11,9 @@
from jinja2 import Environment, FileSystemLoader
# Execution of the code begins here
-def create_terraform_sddc(inputfile, outdir, service_dir, prefix, config):
+def create_terraform_sddc(inputfile, outdir, service_dir, prefix, ct):
tfStr = {}
filename = inputfile
- configFileName = config
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
ADS = ["AD1", "AD2", "AD3"]
@@ -91,7 +87,7 @@ def create_terraform_sddc(inputfile, outdir, service_dir, prefix, config):
if (len(df1.index) !=1):
print("SDDC " + sddc_name +" for region "+region +" does not have a single row in "+sheetNamenetwork + " sheet. Exiting!!!")
- exit()
+ exit(1)
# List of column headers
dfcolumns1 = df1.columns.values.tolist()
@@ -176,7 +172,7 @@ def create_terraform_sddc(inputfile, outdir, service_dir, prefix, config):
subnet_id = subnets.vcn_subnet_map[key][2]
except Exception as e:
print("Invalid Subnet Name specified for row " + str(i + 3) + ". It Doesnt exist in SubnetsVLANs sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_id': commonTools.check_tf_variable(network_compartment_id),
'vcn_name': vcn_name,'provisioning_subnet': subnet_id}
diff --git a/cd3_automation_toolkit/SDDC/export_sddc_nonGreenField.py b/cd3_automation_toolkit/SDDC/export_sddc_nonGreenField.py
index 1b3fc6d42..64c6943c7 100644
--- a/cd3_automation_toolkit/SDDC/export_sddc_nonGreenField.py
+++ b/cd3_automation_toolkit/SDDC/export_sddc_nonGreenField.py
@@ -14,8 +14,7 @@
from commonTools import *
-def get_volume_data(config, volume_id, ct):
- bvol = BlockstorageClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+def get_volume_data(bvol, volume_id, ct):
volume_data = bvol.get_volume(volume_id).data
vol_name = volume_data.display_name
comp_list = list(ct.ntk_compartment_ids.values())
@@ -23,26 +22,19 @@ def get_volume_data(config, volume_id, ct):
return vol_comp+'@'+vol_name
# Execution of the code begins here
-def export_sddc(inputfile, outdir, service_dir,config,ct, export_compartments=[], export_regions=[],display_names=[],ad_names=[]):
+def export_sddc(inputfile, outdir, service_dir,config,signer, ct, export_compartments=[], export_regions=[]):
cd3file = inputfile
if ('.xls' not in cd3file):
print("\nAcceptable cd3 format: .xlsx")
exit()
- configFileName = config
- config = oci.config.from_file(file_location=configFileName)
-
global importCommands, values_for_column_sddc, df, sheet_dict_sddc # declaring global variables
sheetName= "SDDCs"
sheetNameNetwork = "SDDCs-Network"
importCommands = {}
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
var_data = {}
AD = lambda ad: "AD1" if ("AD-1" in ad or "ad-1" in ad) else ("AD2" if ("AD-2" in ad or "ad-2" in ad) else ("AD3" if ("AD-3" in ad or "ad-3" in ad) else " NULL")) # Get shortend AD
@@ -75,8 +67,9 @@ def export_sddc(inputfile, outdir, service_dir,config,ct, export_compartments=[]
script_file = f'{outdir}/{reg}/{service_dir}/' + file_name
importCommands[reg].write("\n######### Writing import for VCNs #########\n")
config.__setitem__("region", ct.region_dict[reg])
- sddc_client = oci.ocvp.SddcClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vnc = VirtualNetworkClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ sddc_client = oci.ocvp.SddcClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ vnc = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ bvol = BlockstorageClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
region = reg.capitalize()
sddc_keys = {}
@@ -89,6 +82,10 @@ def export_sddc(inputfile, outdir, service_dir,config,ct, export_compartments=[]
mgmt_vols = []
wkld_vols = []
sddc = sddc_client.get_sddc(sddc_id=sddc.id).data
+ sddc_init_config1 = sddc.initial_configuration.initial_cluster_configurations
+ sddc_init_config=sddc_init_config1[0]
+ sddc_network=sddc_init_config.network_configuration
+ sddc_datastores= sddc_init_config.datastores
if sddc.lifecycle_state=='DELETED':
continue
@@ -100,8 +97,8 @@ def export_sddc(inputfile, outdir, service_dir,config,ct, export_compartments=[]
ssh_key = json.dumps(ssh_key)
sddc_keys[key_name] = ssh_key
importCommands[reg].write("\nterraform import \"module.sddcs[\\\"" + tf_name + "\\\"].oci_ocvp_sddc.sddc\" " + sddc.id)
- if 'Standard' in sddc.initial_host_shape_name:
- for item in sddc.datastores:
+ if 'Standard' in sddc_init_config.initial_host_shape_name:
+ for item in sddc_datastores:
if item.datastore_type == "MANAGEMENT":
mgmt_vols = item.block_volume_ids
if item.datastore_type == "WORKLOAD":
@@ -113,7 +110,7 @@ def export_sddc(inputfile, outdir, service_dir,config,ct, export_compartments=[]
elif (col_header == "Compartment Name"):
values_for_column_sddc[col_header].append(ntk_compartment_name)
elif ("Availability Domain" in col_header):
- value = sddc.__getattribute__(sheet_dict_sddc[col_header])
+ value = sddc_init_config.__getattribute__(sheet_dict_sddc[col_header])
ad = ""
if ("AD-1" in value or "ad-1" in value):
ad = "AD1"
@@ -125,63 +122,63 @@ def export_sddc(inputfile, outdir, service_dir,config,ct, export_compartments=[]
elif col_header == 'Management Block Volumes':
mgmt_vol_data = ""
for vol_id in mgmt_vols:
- mgmt_vol_data = mgmt_vol_data+","+get_volume_data(config, volume_id=vol_id, ct=ct)
+ mgmt_vol_data = mgmt_vol_data+","+get_volume_data(bvol, volume_id=vol_id, ct=ct)
values_for_column_sddc[col_header].append(mgmt_vol_data[1:])
elif col_header == 'Workload Block Volumes':
wkld_vol_data = ""
for vol_id in wkld_vols:
- wkld_vol_data = wkld_vol_data+","+get_volume_data(config, volume_id=vol_id, ct=ct)
+ wkld_vol_data = wkld_vol_data+","+get_volume_data(bvol, volume_id=vol_id, ct=ct)
values_for_column_sddc[col_header].append(wkld_vol_data[1:])
elif col_header == 'SSH Key Var Name':
values_for_column_sddc[col_header].append(key_name)
elif (col_header == "Provisioning Subnet"):
- subnet_id = sddc.provisioning_subnet_id
+ subnet_id = sddc_network.provisioning_subnet_id
subnet_info = vnc.get_subnet(subnet_id)
sub_name = subnet_info.data.display_name # Subnet-Name
vcn_name = vnc.get_vcn(subnet_info.data.vcn_id).data.display_name # vcn-Name
values_for_column_sddc[col_header].append(vcn_name+"_"+sub_name)
elif(col_header == "NSX Edge Uplink1 VLAN"):
- vlan_id = sddc.nsx_edge_uplink1_vlan_id
+ vlan_id = sddc_network.nsx_edge_uplink1_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "NSX Edge Uplink2 VLAN"):
- vlan_id = sddc.nsx_edge_uplink2_vlan_id
+ vlan_id = sddc_network.nsx_edge_uplink2_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "NSX Edge VTEP VLAN"):
- vlan_id = sddc.nsx_edge_v_tep_vlan_id
+ vlan_id = sddc_network.nsx_edge_v_tep_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "NSX VTEP VLAN"):
- vlan_id = sddc.nsx_v_tep_vlan_id
+ vlan_id = sddc_network.nsx_v_tep_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "vMotion VLAN"):
- vlan_id = sddc.vmotion_vlan_id
+ vlan_id = sddc_network.vmotion_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "vSAN VLAN"):
- vlan_id = sddc.vsan_vlan_id
+ vlan_id = sddc_network.vsan_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "vSphere VLAN"):
- vlan_id = sddc.vsphere_vlan_id
+ vlan_id = sddc_network.vsphere_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "HCX VLAN"):
- vlan_id = sddc.hcx_vlan_id
+ vlan_id = sddc_network.hcx_vlan_id
if vlan_id == None:
values_for_column_sddc[col_header].append("")
else:
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "Replication Net VLAN"):
- vlan_id = sddc.replication_vlan_id
+ vlan_id = sddc_network.replication_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif (col_header == "Provisioning Net VLAN"):
- vlan_id = sddc.provisioning_vlan_id
+ vlan_id = sddc_network.provisioning_vlan_id
values_for_column_sddc[col_header].append(vnc.get_vlan(vlan_id).data.display_name)
elif col_header.lower() in commonTools.tagColumns:
values_for_column_sddc = commonTools.export_tags(sddc, col_header,
values_for_column_sddc)
else:
- oci_objs = [sddc]
+ oci_objs = [sddc,sddc_init_config,sddc_network,sddc_datastores]
values_for_column_sddc = commonTools.export_extra_columns(oci_objs, col_header,
sheet_dict_sddc,
values_for_column_sddc)
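The recurring change in this patch is that every OCI service client is now constructed with an explicit `signer` alongside `config`, so the same export code can run with API-key, instance-principal, or session-token authentication. A minimal sketch of the shared pattern (the helper name is illustrative, not part of the toolkit):

```python
def build_client_kwargs(config, signer=None, retry_strategy="DEFAULT"):
    """Assemble the keyword arguments shared by all OCI service clients.

    When a signer is supplied (e.g. instance-principal or session-token
    auth), it is forwarded alongside the config; with signer=None the SDK
    falls back to API-key auth from the config file alone, which matches
    the pre-patch behaviour.
    """
    kwargs = {"config": config, "retry_strategy": retry_strategy}
    if signer is not None:
        kwargs["signer"] = signer
    return kwargs
```

Each call site such as `BlockstorageClient(config=config, retry_strategy=..., signer=signer)` is an instance of this pattern.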
diff --git a/cd3_automation_toolkit/Security/CloudGuard/enable_terraform_cloudguard.py b/cd3_automation_toolkit/Security/CloudGuard/enable_terraform_cloudguard.py
index 91282a08b..4e3c20f8a 100644
--- a/cd3_automation_toolkit/Security/CloudGuard/enable_terraform_cloudguard.py
+++ b/cd3_automation_toolkit/Security/CloudGuard/enable_terraform_cloudguard.py
@@ -18,10 +18,8 @@
# Required Inputs- Config file, prefix AND outdir
######
# Execution of the code begins here
-def enable_cis_cloudguard(outdir, service_dir,prefix,region, config=DEFAULT_LOCATION):
- configFileName = config
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
+def enable_cis_cloudguard(outdir, service_dir,prefix, ct, region):
+
#home_region=ct.home_region
region_key = ct.region_dict[region]
diff --git a/cd3_automation_toolkit/Security/KeyVault/create_terraform_keyvault.py b/cd3_automation_toolkit/Security/KeyVault/create_terraform_keyvault.py
index 0aadfbb10..e3eb3dea4 100644
--- a/cd3_automation_toolkit/Security/KeyVault/create_terraform_keyvault.py
+++ b/cd3_automation_toolkit/Security/KeyVault/create_terraform_keyvault.py
@@ -18,19 +18,15 @@
# Required Inputs- Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_cis_keyvault(outdir, service_dir, service_dir_iam, prefix, region_name, comp_name, config=DEFAULT_LOCATION):
+def create_cis_keyvault(outdir, service_dir, service_dir_iam, prefix, ct, region_name, comp_name):
# Declare variables
- configFileName = config
region_name = region_name.strip().lower()
comp_name = comp_name.strip()
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
if region_name not in ct.all_regions:
print("Invalid Region!! Tenancy is not subscribed to this region. Please try again")
- exit()
+ exit(1)
# Load the template file
diff --git a/cd3_automation_toolkit/Storage/BlockVolume/create_terraform_block_volumes.py b/cd3_automation_toolkit/Storage/BlockVolume/create_terraform_block_volumes.py
index a22fe4dff..64dd6effe 100644
--- a/cd3_automation_toolkit/Storage/BlockVolume/create_terraform_block_volumes.py
+++ b/cd3_automation_toolkit/Storage/BlockVolume/create_terraform_block_volumes.py
@@ -21,15 +21,12 @@
# Required Inputs-CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def create_terraform_block_volumes(inputfile, outdir, service_dir, prefix,config=DEFAULT_LOCATION):
+def create_terraform_block_volumes(inputfile, outdir, service_dir, prefix,ct):
filename = inputfile
- configFileName = config
tfStr = {}
sheetName="BlockVolumes"
auto_tfvars_filename = prefix + '_' + sheetName.lower() + '.auto.tfvars'
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
ADS = ["AD1", "AD2", "AD3"]
diff --git a/cd3_automation_toolkit/Storage/BlockVolume/export_blockvolumes_nonGreenField.py b/cd3_automation_toolkit/Storage/BlockVolume/export_blockvolumes_nonGreenField.py
index 1b0bd046f..0f81f2f82 100644
--- a/cd3_automation_toolkit/Storage/BlockVolume/export_blockvolumes_nonGreenField.py
+++ b/cd3_automation_toolkit/Storage/BlockVolume/export_blockvolumes_nonGreenField.py
@@ -128,15 +128,13 @@ def print_blockvolumes(region, BVOLS, bvol, compute, ct, values_for_column, ntk_
values_for_column = commonTools.export_extra_columns(oci_objs, col_header, sheet_dict, values_for_column)
# Execution of the code begins here
-def export_blockvolumes(inputfile, _outdir, service_dir, _config, ct, export_compartments=[], export_regions=[], display_names = [], ad_names = []):
+def export_blockvolumes(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[], export_regions=[], display_names = [], ad_names = []):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global values_for_vcninfo
global cd3file
global reg
- global outdir
global values_for_column
cd3file = inputfile
@@ -144,17 +142,7 @@ def export_blockvolumes(inputfile, _outdir, service_dir, _config, ct, export_com
print("\nAcceptable cd3 format: .xlsx")
exit()
-
- outdir = _outdir
- configFileName = _config
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = "BlockVolumes"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'],"root",configFileName)
-
# Read CD3
df, values_for_column= commonTools.read_cd3(cd3file,sheetName)
@@ -183,8 +171,8 @@ def export_blockvolumes(inputfile, _outdir, service_dir, _config, ct, export_com
importCommands[reg].write("\n\n######### Writing import for Block Volumes #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- compute = ComputeClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- bvol = BlockstorageClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ compute = ComputeClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ bvol = BlockstorageClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
for ntk_compartment_name in export_compartments:
BVOLS = oci.pagination.list_call_get_all_results(bvol.list_volumes,compartment_id=ct.ntk_compartment_ids[ntk_compartment_name],lifecycle_state="AVAILABLE")
diff --git a/cd3_automation_toolkit/Storage/FileSystem/create_terraform_fss.py b/cd3_automation_toolkit/Storage/FileSystem/create_terraform_fss.py
index a055d1dd6..de5758a42 100644
--- a/cd3_automation_toolkit/Storage/FileSystem/create_terraform_fss.py
+++ b/cd3_automation_toolkit/Storage/FileSystem/create_terraform_fss.py
@@ -20,13 +20,11 @@
# If input is csv file; convert to excel
# Execution of the code begins here
-def create_terraform_fss(inputfile, outdir, service_dir, prefix,config=DEFAULT_LOCATION):
+def create_terraform_fss(inputfile, outdir, service_dir, prefix,ct):
filename = inputfile
- configFileName = config
sheetName = "FSS"
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
+
auto_tfvars_filename = prefix + '_' + sheetName.lower() + '.auto.tfvars'
# Load the template file
@@ -206,7 +204,7 @@ def fss_exports(i, df, tempStr):
subnet_id = subnets.vcn_subnet_map[key][2]
except Exception as e:
print("Invalid Subnet Name specified for row " + str(i + 3) + ". It doesn't exist in Subnets sheet. Exiting!!!")
- exit()
+ exit(1)
tempdict = {'network_compartment_id': commonTools.check_tf_variable(network_compartment_id), 'vcn_name': vcn_name,
'subnet_id': subnet_id}
diff --git a/cd3_automation_toolkit/Storage/FileSystem/export_fss_nonGreenField.py b/cd3_automation_toolkit/Storage/FileSystem/export_fss_nonGreenField.py
index 66c86f361..84529c5b7 100644
--- a/cd3_automation_toolkit/Storage/FileSystem/export_fss_nonGreenField.py
+++ b/cd3_automation_toolkit/Storage/FileSystem/export_fss_nonGreenField.py
@@ -54,10 +54,10 @@ def add_column_data(reg, cname, AD_name, mt_display_name, vplussubnet, mnt_p_ip,
values_for_column_fss)
-def __get_mount_info(cname, compartment_id, reg, availability_domain_name, config):
- file_system = oci.file_storage.FileStorageClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- network = oci.core.VirtualNetworkClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- vnc_info = oci.core.VirtualNetworkClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+def __get_mount_info(cname, compartment_id, reg, availability_domain_name,signer):
+ file_system = oci.file_storage.FileStorageClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ network = oci.core.VirtualNetworkClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
+ vnc_info = oci.core.VirtualNetworkClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
global exports_ids
AD_name = AD(availability_domain_name)
try:
@@ -155,7 +155,7 @@ def __get_mount_info(cname, compartment_id, reg, availability_domain_name, confi
pass
# Execution of the code begins here
-def export_fss(inputfile, outdir, service_dir, ct, config=DEFAULT_LOCATION, export_compartments=[], export_regions=[]):
+def export_fss(inputfile, outdir, service_dir, config1, signer1, ct, export_compartments=[], export_regions=[]):
input_compartment_names = export_compartments
cd3file = inputfile
@@ -164,18 +164,13 @@ def export_fss(inputfile, outdir, service_dir, ct, config=DEFAULT_LOCATION, expo
exit()
sheetName = "FSS"
- configFileName = config
- config = oci.config.from_file(file_location=configFileName)
- global file_system, vnc_info, importCommands, rows, all_ads, input_compartment_list, AD, df, values_for_column_fss, sheet_dict_instances
+ global file_system, vnc_info, importCommands, rows, all_ads, input_compartment_list, AD, df, values_for_column_fss, sheet_dict_instances, config, signer
+ config=config1
+ signer=signer1
+ file_system = oci.file_storage.FileStorageClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
- file_system = oci.file_storage.FileStorageClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
-
- vnc_info = oci.core.VirtualNetworkClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ vnc_info = oci.core.VirtualNetworkClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
importCommands = {}
rows = []
all_ads = []
@@ -192,7 +187,7 @@ def export_fss(inputfile, outdir, service_dir, ct, config=DEFAULT_LOCATION, expo
# Fetch all ADs in all Subscribed Regions
for reg in export_regions:
config.__setitem__("region", ct.region_dict[reg])
- ads = oci.identity.IdentityClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ ads = oci.identity.IdentityClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
for aval in ads.list_availability_domains(compartment_id=config['tenancy']).data:
all_ads.append(aval.name)
@@ -211,9 +206,9 @@ def export_fss(inputfile, outdir, service_dir, ct, config=DEFAULT_LOCATION, expo
for reg in export_regions:
config.__setitem__("region", ct.region_dict[reg])
for ntk_compartment_name in export_compartments:
- ads = oci.identity.IdentityClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ ads = oci.identity.IdentityClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
for aval in ads.list_availability_domains(compartment_id=config['tenancy']).data:
- __get_mount_info(ntk_compartment_name, ct.ntk_compartment_ids[ntk_compartment_name], reg, aval.name, config)
+ __get_mount_info(ntk_compartment_name, ct.ntk_compartment_ids[ntk_compartment_name], reg, aval.name,signer)
commonTools.write_to_cd3(values_for_column_fss, cd3file, sheetName)
diff --git a/cd3_automation_toolkit/Storage/FileSystem/templates/fss-template b/cd3_automation_toolkit/Storage/FileSystem/templates/fss-template
index d6df2b65d..26990f673 100644
--- a/cd3_automation_toolkit/Storage/FileSystem/templates/fss-template
+++ b/cd3_automation_toolkit/Storage/FileSystem/templates/fss-template
@@ -24,8 +24,8 @@ fss = {
#Optional
display_name = "{{ fss_name }}"
- {% if kms_key_name and kms_key_name != "" %}
- kms_key_name = "{{ kms_key_name }}"
+ {% if kms_key_id and kms_key_id != "" %}
+ kms_key_id = "{{ kms_key_id }}"
{% endif %}
{% if source_snapshot_name and source_snapshot_name != "" %}
diff --git a/cd3_automation_toolkit/Storage/ObjectStorage/create_terraform_oss.py b/cd3_automation_toolkit/Storage/ObjectStorage/create_terraform_oss.py
index b63d07232..de2a30571 100644
--- a/cd3_automation_toolkit/Storage/ObjectStorage/create_terraform_oss.py
+++ b/cd3_automation_toolkit/Storage/ObjectStorage/create_terraform_oss.py
@@ -18,19 +18,15 @@
######
# Execution of the code begins here
-def create_terraform_oss(inputfile, outdir, service_dir, prefix,config):
+def create_terraform_oss(inputfile, outdir, service_dir, prefix, ct):
# Declare variables
filename = inputfile
- configFileName = config
prefix = prefix
outdir = outdir
#Get subscribed regions and home region
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
-
# Load the template file
file_loader = FileSystemLoader(f'{Path(__file__).parent}/templates')
env = Environment(loader = file_loader, keep_trailing_newline = True, trim_blocks = True, lstrip_blocks = True)
@@ -180,7 +176,7 @@ def create_terraform_oss(inputfile, outdir, service_dir, prefix,config):
if str(df.loc[i, 'Region']).lower() == 'nan' or str(df.loc[i, 'Compartment Name']).lower() == 'nan' or str(df.loc[i, 'Bucket Name']).lower() == 'nan':
print("\nThe values for Region, Compartment Name and Bucket Name cannot be left empty. Please enter a value and try again !!")
- exit()
+ exit(1)
for columnname in dfcolumns:
# Column value
diff --git a/cd3_automation_toolkit/Storage/ObjectStorage/export_terraform_oss.py b/cd3_automation_toolkit/Storage/ObjectStorage/export_terraform_oss.py
index fcae09974..fce654d6f 100644
--- a/cd3_automation_toolkit/Storage/ObjectStorage/export_terraform_oss.py
+++ b/cd3_automation_toolkit/Storage/ObjectStorage/export_terraform_oss.py
@@ -149,14 +149,12 @@ def print_buckets(region, outdir, service_dir, bucket_data, values_for_column, n
# Required Inputs- CD3 excel file, Config file, prefix AND outdir
######
# Execution of the code begins here
-def export_buckets(inputfile, _outdir, service_dir, ct, _config=DEFAULT_LOCATION, export_compartments=[],export_regions=[]):
+def export_buckets(inputfile, outdir, service_dir, config, signer, ct, export_compartments=[],export_regions=[]):
global tf_import_cmd
global sheet_dict
global importCommands
- global config
global cd3file
global reg
- global outdir
global values_for_column
cd3file = inputfile
@@ -165,15 +163,7 @@ def export_buckets(inputfile, _outdir, service_dir, ct, _config=DEFAULT_LOCATION
exit()
# Declare variables
- configFileName = _config
- outdir = _outdir
- config = oci.config.from_file(file_location=configFileName)
-
sheetName = "Buckets"
- if ct==None:
- ct = commonTools()
- ct.get_subscribedregions(configFileName)
- ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
# Read CD3
df, values_for_column = commonTools.read_cd3(cd3file, sheetName)
@@ -205,7 +195,7 @@ def export_buckets(inputfile, _outdir, service_dir, ct, _config=DEFAULT_LOCATION
importCommands[reg].write("\n\n######### Writing import for Buckets #########\n\n")
config.__setitem__("region", ct.region_dict[reg])
region = reg.capitalize()
- buckets_client = ObjectStorageClient(config, retry_strategy = oci.retry.DEFAULT_RETRY_STRATEGY)
+ buckets_client = ObjectStorageClient(config=config, retry_strategy = oci.retry.DEFAULT_RETRY_STRATEGY, signer=signer)
namespace = buckets_client.get_namespace().data
namespace_name = namespace
for ntk_compartment_name in export_compartments:
diff --git a/cd3_automation_toolkit/Storage/ObjectStorage/templates/oss-template b/cd3_automation_toolkit/Storage/ObjectStorage/templates/oss-template
index 2e94e01e9..7ed8e9a5f 100644
--- a/cd3_automation_toolkit/Storage/ObjectStorage/templates/oss-template
+++ b/cd3_automation_toolkit/Storage/ObjectStorage/templates/oss-template
@@ -32,6 +32,8 @@ buckets = {
{% if kms_key_id and kms_key_id != "" %}
kms_key_id = "{{ kms_key_id }}"
+ {% else %}
+ kms_key_id = null
{% endif %}
{% if auto_tiering and auto_tiering != "" %}
diff --git a/cd3_automation_toolkit/cd3Validator.py b/cd3_automation_toolkit/cd3Validator.py
index 893e01464..5c00369a5 100644
--- a/cd3_automation_toolkit/cd3Validator.py
+++ b/cd3_automation_toolkit/cd3Validator.py
@@ -15,12 +15,12 @@
from oci.core.virtual_network_client import VirtualNetworkClient
from commonTools import *
-
+'''
def get_vcn_ids(compartment_ids, config):
# Fetch the VCN ID
for region in ct.all_regions:
config.__setitem__("region", ct.region_dict[region])
- vnc = VirtualNetworkClient(config)
+ vnc = VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
for comp_id in compartment_ids.values():
vcn_list = oci.pagination.list_call_get_all_results(vnc.list_vcns, compartment_id=comp_id)
for vcn in vcn_list.data:
@@ -28,32 +28,6 @@ def get_vcn_ids(compartment_ids, config):
vcn_ids[vcn.display_name] = vcn.id
return vcn_ids
-# Check for unique values across two sheets
-def compare_values(list_to_check,value_to_check,index):
- if (value_to_check not in list_to_check):
- if 'Availability Domain(AD1|AD2|AD3)' in index[1]:
- log(f'ROW {index[0] + 3} : Invalid value for column "{index[1]}".')
- else:
- log(f'ROW {index[0] + 3} : Invalid value for column "{index[1]}". {value_to_check} does not exist in {index[2]} tab.')
- return True
- return False
-
-# Checks for special characters in dns_label name
-def checklabel(lable, count):
- present = False
- lable = str(lable).strip()
- if (lable == "Nan") or (lable == "") or (lable == "NaN") or (lable == "nan"):
- pass
- else:
- regex = re.compile('[@_!#$%^&* ()<>?/\|}{~:]')
- if (regex.search(lable) == None):
- pass
- else:
- log(f'ROW {count+2} : "DNS Label" value has special characters.')
- present = True
- return present
-
-
# Shows LPG Peering that will be established based on hub_spoke_peer_none column
def showPeering(vcnsob):
present = False
@@ -83,6 +57,33 @@ def showPeering(vcnsob):
return present
+'''
+
+# Check for unique values across two sheets
+def compare_values(list_to_check,value_to_check,index):
+ if (value_to_check not in list_to_check):
+ if 'Availability Domain(AD1|AD2|AD3)' in index[1]:
+ log(f'ROW {index[0] + 3} : Invalid value for column "{index[1]}".')
+ else:
+ log(f'ROW {index[0] + 3} : Invalid value for column "{index[1]}". {value_to_check} does not exist in {index[2]} tab.')
+ return True
+ return False
+
+# Checks for special characters in dns_label name
+def checklabel(lable, count):
+ present = False
+ lable = str(lable).strip()
+ if (lable == "Nan") or (lable == "") or (lable == "NaN") or (lable == "nan"):
+ pass
+ else:
+ regex = re.compile('[@_!#$%^&* ()<>?/\|}{~:]')
+ if (regex.search(lable) == None):
+ pass
+ else:
+ log(f'ROW {count+2} : "DNS Label" value has special characters.')
+ present = True
+ return present
+
# Checks for duplicates
def checkIfDuplicates(listOfElems):
@@ -366,8 +367,8 @@ def validate_subnets(filename, comp_ids, vcnobj):
# Check if VCNs tab is compliant
-def validate_vcns(filename, comp_ids, vcnobj, config): # ,vcn_cidrs,vcn_compartment_ids):
- vcn_ids = get_vcn_ids(comp_ids, config)
+def validate_vcns(filename, comp_ids, vcnobj):# config): # ,vcn_cidrs,vcn_compartment_ids):
+ #vcn_ids = get_vcn_ids(comp_ids, config)
dfv = data_frame(filename, 'VCNs')
@@ -381,6 +382,7 @@ def validate_vcns(filename, comp_ids, vcnobj, config): # ,vcn_cidrs,vcn_compart
vcn_reg_check = False
vcn_vcnname_check = False
vcn_dns_length = False
+ vcn_peer_check = False
vcn_check = False
@@ -467,6 +469,7 @@ def validate_vcns(filename, comp_ids, vcnobj, config): # ,vcn_cidrs,vcn_compart
print("VCN CIDRs Check failed!!")
log("End VCN CIDRs Check--------------------------------------\n")
+ '''
log("Start LPG Peering Check---------------------------------------------")
log("Current Status of LPGs in OCI for each VCN listed in VCNs tab:")
oci_vcn_lpgs = {}
@@ -501,7 +504,7 @@ def validate_vcns(filename, comp_ids, vcnobj, config): # ,vcn_cidrs,vcn_compart
vcn_lpg_str = ""
config.__setitem__("region", ct.region_dict[region])
- vnc = oci.core.VirtualNetworkClient(config)
+ vnc = oci.core.VirtualNetworkClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
lpg_list = vnc.list_local_peering_gateways(compartment_id=comp_id, vcn_id=vcn_id)
@@ -522,6 +525,7 @@ def validate_vcns(filename, comp_ids, vcnobj, config): # ,vcn_cidrs,vcn_compart
log("Link: https://confluence.oraclecorp.com/confluence/display/NAC/Support+for+Non-GreenField+Tenancies")
log("End LPG Peering Check---------------------------------------------\n")
+ '''
return vcn_check, vcn_cidr_check, vcn_peer_check
@@ -1083,6 +1087,8 @@ def validate_compartments(filename):
for i in dfcomp.index:
region = str(dfcomp.loc[i, 'Region']).strip().lower()
+ parent_comp = str(dfcomp.loc[i, 'Parent Compartment']).strip().split("::")[-1]
+
# Encountered
if (region in commonTools.endNames):
break
@@ -1095,6 +1101,10 @@ def validate_compartments(filename):
if str(dfcomp.loc[i, 'Name']).strip().lower() == 'nan':
log(f'ROW {i+3} : Empty value at column "Name".')
comp_empty_check = True
+ if(str(dfcomp.loc[i, 'Name']).strip() == parent_comp):
+ log(f'ROW {i + 3} : Name cannot be same as Parent Compartment Name')
+ comp_invalid_check = True
+
if (comp_empty_check == True or parent_comp_check == True or comp_invalid_check == True):
print("Null or Wrong value Check failed!!")
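The new compartment check above rejects a compartment whose name matches its immediate parent, where the parent is the last `::` segment of the path. The rule in isolation (a hypothetical standalone function, assuming the toolkit's `parent::child` path convention):

```python
def compartment_name_valid(name, parent_path):
    """Return False when a compartment is named after its immediate
    parent. Parent paths use '::' separators, e.g. 'root::network',
    so the immediate parent is the final segment."""
    parent = parent_path.strip().split("::")[-1]
    return name.strip() != parent
```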
@@ -1201,6 +1211,15 @@ def validate_policies(filename,comp_ids):
log(f'ROW {i+3} : Empty value at column "Region" and "Name".')
policies_empty_check = True
+ statement = str(dfp.loc[i, 'Policy Statements']).strip().lower()
+ words = statement.split()
+ if ('to' in words and words.index('to') + 1 < len(words)):
+ verb = words[words.index('to') + 1]
+ if verb not in ['inspect', 'read', 'use', 'manage']:
+ log(f'ROW {i + 3} : Invalid verb used in Policy Statement')
+ policies_invalid_check = True
+
+
if policies_empty_check == True or policies_comp_check == True or policies_invalid_check == True:
print("Null or Wrong value Check failed!!")
return True
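The policy check added above splits each statement and verifies the word after `to` against the four OCI policy verbs (`inspect`, `read`, `use`, `manage`). A standalone sketch of the same rule, with a bounds guard for statements that end in `to` (function name is illustrative):

```python
VALID_VERBS = {"inspect", "read", "use", "manage"}

def policy_verb_ok(statement):
    """Check the verb in an OCI policy statement of the form:
    'allow group <g> to <verb> <resource-type> in <scope>'."""
    words = statement.strip().lower().split()
    if "to" not in words:
        return True  # nothing to validate (e.g. 'define' statements)
    idx = words.index("to")
    if idx + 1 >= len(words):
        return False  # 'to' with no verb following it
    return words[idx + 1] in VALID_VERBS
```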
@@ -1244,11 +1263,19 @@ def validate_tags(filename,comp_ids):
dfcolumns = dftag.columns.values.tolist()
for columnname in dfcolumns:
+ columnvalue = str(dftag[columnname][i])
# Column value
- if 'description' in columnname.lower():
- columnvalue = str(dftag[columnname][i])
- else:
- columnvalue = str(dftag[columnname][i]).strip()
+ if columnname == "Tag Description":
+ columnvalue = columnvalue.lower()
+ if str(dftag.loc[i, 'Tag Keys']).strip().lower() != 'nan':
+ if columnvalue == '' or columnvalue == 'nan':
+ log(f'ROW {i + 3} : Empty value at column "Tag Description".')
+ tag_empty_check = True
+ if columnname == "Namespace Description":
+ columnvalue = columnvalue.lower()
+ if columnvalue == '' or columnvalue == 'nan':
+ log(f'ROW {i + 3} : Empty value at column "Namespace Description".')
+ tag_empty_check = True
if columnname == 'Tag Namespace':
columnvalue = str(columnvalue).strip()
@@ -1259,6 +1286,9 @@ def validate_tags(filename,comp_ids):
if ' ' in columnvalue or '.' in columnvalue:
log(f'ROW {i+3} : Spaces and Periods are not allowed in Tag Namespaces.')
tag_invalid_check = True
+ if columnvalue.lower().startswith('oci') or columnvalue.lower().startswith('orcl'):
+ log(f'ROW {i + 3} : Tag Namespaces cannot start with oci or orcl')
+ tag_invalid_check = True
if columnname == 'Tag Keys':
columnvalue = str(columnvalue).strip()
@@ -1269,6 +1299,9 @@ def validate_tags(filename,comp_ids):
if ' ' in columnvalue or '.' in columnvalue:
log(f'ROW {i+3} : Spaces and Periods are not allowed in Tag Keys.')
tag_invalid_check = True
+ if columnvalue.lower().startswith('oci') or columnvalue.lower().startswith('orcl'):
+ log(f'ROW {i + 3} : Tag Definition Names cannot start with oci or orcl')
+ tag_invalid_check = True
if (tag_empty_check == True or tag_invalid_check == True or tag_comp_check == True):
print("Null or Wrong value Check failed!!")
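The tag validation now also rejects namespaces and tag keys that begin with the reserved `oci` or `orcl` prefixes, on top of the existing space/period rule. The combined rule can be sketched as a single predicate (a hypothetical helper, not the validator's actual structure):

```python
RESERVED_PREFIXES = ("oci", "orcl")

def tag_name_ok(name):
    """Accept a tag namespace or tag key only if it is non-empty,
    does not start with an OCI-reserved prefix, and contains no
    spaces or periods."""
    n = name.strip()
    if not n or n.lower() == "nan":
        return False
    if n.lower().startswith(RESERVED_PREFIXES):
        return False
    return " " not in n and "." not in n
```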
@@ -1495,7 +1528,7 @@ def validate_buckets(filename, comp_ids):
else:
return False
-def validate_cd3(filename, var_file, prefix, outdir, choices, configFileName):
+def validate_cd3(choices, filename, var_file, prefix, outdir, ct1): #config1, signer1, ct1):
CD3_LOG_LEVEL = 60
logging.addLevelName(CD3_LOG_LEVEL, "custom")
file=prefix+"_cd3Validator.log"
@@ -1507,8 +1540,10 @@ def validate_cd3(filename, var_file, prefix, outdir, choices, configFileName):
global log
log = partial(logger.log, CD3_LOG_LEVEL)
- global ct
- ct = commonTools()
+ global ct #, config, signer
+ ct=ct1
+ #config=config1
+ #signer =signer1
global compartment_ids
compartment_ids = {}
global vcn_ids
@@ -1537,9 +1572,8 @@ def validate_cd3(filename, var_file, prefix, outdir, choices, configFileName):
if not os.path.exists(filename):
print("\nCD3 excel sheet not found at "+filename +"\nExiting!!")
- exit()
- config = oci.config.from_file(file_location=configFileName)
- ct.get_subscribedregions(configFileName)
+ exit(1)
+
#ct.get_network_compartment_ids(config['tenancy'], "root", configFileName)
print("Getting Compartments OCIDs...")
ct.get_compartment_map(var_file,'Validator')
@@ -1577,8 +1611,10 @@ def validate_cd3(filename, var_file, prefix, outdir, choices, configFileName):
val_net=True
log("\n============================= Verifying VCNs Tab ==========================================\n")
+ log("\n====================== Note: LPGs will not be verified ====================================\n")
print("\nProcessing VCNs Tab..")
- vcn_check, vcn_cidr_check, vcn_peer_check = validate_vcns(filename, ct.ntk_compartment_ids, vcnobj, config)
+ print("NOTE: LPGs will not be verified")
+ vcn_check, vcn_cidr_check, vcn_peer_check = validate_vcns(filename, ct.ntk_compartment_ids, vcnobj) #, config)
log("============================= Verifying SubnetsVLANs Tab ==========================================\n")
print("\nProcessing SubnetsVLANs Tab..")
diff --git a/cd3_automation_toolkit/cis_reports.py b/cd3_automation_toolkit/cis_reports.py
index b12f38f12..9adbfe59f 100644
--- a/cd3_automation_toolkit/cis_reports.py
+++ b/cd3_automation_toolkit/cis_reports.py
@@ -1 +1,5351 @@
-##########################################################################
# Copyright (c) 2016, 2023, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
#
# cis_reports.py
# @author base: Adi Zohar
# @author: Josh Hammer, Andre Correa, Chad Russell, Jake Bloom and Olaf Heimburger
#
# Supports Python 3 and above
#
# coding: utf-8
##########################################################################
from __future__ import print_function
import concurrent.futures
import sys
import argparse
import datetime
import pytz
import oci
import json
import os
import csv
import itertools
from threading import Thread
import hashlib
import re
import requests
try:
from xlsxwriter.workbook import Workbook
import glob
OUTPUT_TO_XLSX = True
except Exception:
OUTPUT_TO_XLSX = False
RELEASE_VERSION = "2.6.4"
PYTHON_SDK_VERSION = "2.110.0"
UPDATED_DATE = "September 18, 2023"
##########################################################################
# debug print
##########################################################################
# DEBUG = False
def debug(msg):
if DEBUG:
print(msg)
##########################################################################
# Print header centered
##########################################################################
def print_header(name):
chars = 90
print('')
print('#' * chars)
print('#' + name.center(chars - 2, ' ') + '#')
print('#' * chars)
##########################################################################
# show_version
##########################################################################
def show_version(verbose=False):
script_version = f'CIS Reports - Release {RELEASE_VERSION}'
script_updated = f'Version {RELEASE_VERSION} Updated on {UPDATED_DATE}'
if verbose:
print_header('Running ' + script_version)
print(script_updated)
print('Please use --help for more info')
print('\nTested oci-python-sdk version: ' + PYTHON_SDK_VERSION)
print('Installed oci-python-sdk version: ' + str(oci.__version__))
else:
print(script_updated)
##########################################################################
# CIS Reporting Class
##########################################################################
class CIS_Report:
# Class variables
_DAYS_OLD = 90
__KMS_DAYS_OLD = 365
__home_region = []
# Time Format
__iso_time_format = "%Y-%m-%dT%H:%M:%S"
# OCI Link
__oci_cloud_url = "https://cloud.oracle.com"
__oci_users_uri = __oci_cloud_url + "/identity/users/"
__oci_policies_uri = __oci_cloud_url + "/identity/policies/"
__oci_groups_uri = __oci_cloud_url + "/identity/groups/"
__oci_dynamic_groups_uri = __oci_cloud_url + "/identity/dynamicgroups/"
__oci_buckets_uri = __oci_cloud_url + "/object-storage/buckets/"
__oci_boot_volumes_uri = __oci_cloud_url + "/block-storage/boot-volumes/"
__oci_block_volumes_uri = __oci_cloud_url + "/block-storage/volumes/"
__oci_fss_uri = __oci_cloud_url + "/fss/file-systems/"
__oci_networking_uri = __oci_cloud_url + "/networking/vcns/"
__oci_adb_uri = __oci_cloud_url + "/db/adb/"
__oci_oicinstance_uri = __oci_cloud_url + "/oic/integration-instances/"
__oci_oacinstance_uri = __oci_cloud_url + "/analytics/instances/"
__oci_compartment_uri = __oci_cloud_url + "/identity/compartments/"
__oci_drg_uri = __oci_cloud_url + "/networking/drgs/"
__oci_cpe_uri = __oci_cloud_url + "/networking/cpes/"
__oci_ipsec_uri = __oci_cloud_url + "/networking/vpn-connections/"
__oci_events_uri = __oci_cloud_url + "/events/rules/"
__oci_loggroup_uri = __oci_cloud_url + "/logging/log-groups/"
__oci_vault_uri = __oci_cloud_url + "/security/kms/vaults/"
__oci_budget_uri = __oci_cloud_url + "/usage/budgets/"
__oci_cgtarget_uri = __oci_cloud_url + "/cloud-guard/targets/"
__oci_onssub_uri = __oci_cloud_url + "/notification/subscriptions/"
__oci_serviceconnector_uri = __oci_cloud_url + "/connector-hub/service-connectors/"
__oci_fastconnect_uri = __oci_cloud_url + "/networking/fast-connect/virtual-circuit/"
__oci_ocid_pattern = r'ocid1\.[a-z,0-9]*\.[a-z,0-9]*\.[a-z,0-9,-]*\.[a-z,0-9,\.]{20,}'
# Start print time info
start_datetime = datetime.datetime.now().replace(tzinfo=pytz.UTC)
start_time_str = str(start_datetime.strftime(__iso_time_format))
report_datetime = str(start_datetime.strftime("%Y-%m-%d_%H-%M-%S"))
# For User based key checks
api_key_time_max_datetime = start_datetime - datetime.timedelta(days=_DAYS_OLD)
str_api_key_time_max_datetime = api_key_time_max_datetime.strftime(__iso_time_format)
api_key_time_max_datetime = datetime.datetime.strptime(str_api_key_time_max_datetime, __iso_time_format)
# For KMS check
kms_key_time_max_datetime = start_datetime - datetime.timedelta(days=__KMS_DAYS_OLD)
str_kms_key_time_max_datetime = kms_key_time_max_datetime.strftime(__iso_time_format)
kms_key_time_max_datetime = datetime.datetime.strptime(str_kms_key_time_max_datetime, __iso_time_format)
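# Note (explanatory sketch, not in the original source): the strftime/strptime
# round trips above normalize the cutoff datetimes. Formatting an aware datetime
# with __iso_time_format drops the sub-second and timezone components, so parsing
# the result back yields a naive datetime that compares cleanly against API
# timestamps parsed with the same format. For example:
#   fmt = "%Y-%m-%dT%H:%M:%S"
#   t = datetime.datetime(2023, 9, 18, 12, 0, 0, 123456, tzinfo=pytz.UTC)
#   datetime.datetime.strptime(t.strftime(fmt), fmt)  # naive, microseconds dropped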
def __init__(self, config, signer, proxy, output_bucket, report_directory, print_to_screen, regions_to_run_in, raw_data, obp, redact_output, debug=False):
# CIS Foundation benchmark 1.2
self.cis_foundations_benchmark_1_2 = {
'1.1': {'section': 'Identity and Access Management', 'recommendation_#': '1.1', 'Title': 'Ensure service level admins are created to manage resources of particular service', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.4', '6.7'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
'1.2': {'section': 'Identity and Access Management', 'recommendation_#': '1.2', 'Title': 'Ensure permissions on all resources are given only to the tenancy administrator group', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.3'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []},
'1.3': {'section': 'Identity and Access Management', 'recommendation_#': '1.3', 'Title': 'Ensure IAM administrators cannot update tenancy Administrators group', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.3', '5.4'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
'1.4': {'section': 'Identity and Access Management', 'recommendation_#': '1.4', 'Title': 'Ensure IAM password policy requires minimum length of 14 or greater', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
'1.5': {'section': 'Identity and Access Management', 'recommendation_#': '1.5', 'Title': 'Ensure IAM password policy expires passwords within 365 days', 'Status': None, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
'1.6': {'section': 'Identity and Access Management', 'recommendation_#': '1.6', 'Title': 'Ensure IAM password policy prevents password reuse', 'Status': None, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.2'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
'1.7': {'section': 'Identity and Access Management', 'recommendation_#': '1.7', 'Title': 'Ensure MFA is enabled for all users with a console password', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['6.3', '6.5'], 'CCCS Guard Rail': '1,2,3,4', 'Remediation': []},
'1.8': {'section': 'Identity and Access Management', 'recommendation_#': '1.8', 'Title': 'Ensure user API keys rotate within 90 days or less', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '4.4'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
'1.9': {'section': 'Identity and Access Management', 'recommendation_#': '1.9', 'Title': 'Ensure user customer secret keys rotate within 90 days or less', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
'1.10': {'section': 'Identity and Access Management', 'recommendation_#': '1.10', 'Title': 'Ensure user auth tokens rotate within 90 days or less', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
'1.11': {'section': 'Identity and Access Management', 'recommendation_#': '1.11', 'Title': 'Ensure API keys are not created for tenancy administrator users', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.4'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
'1.12': {'section': 'Identity and Access Management', 'recommendation_#': '1.12', 'Title': 'Ensure all OCI IAM user accounts have a valid and current email address', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.1'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []},
'1.13': {'section': 'Identity and Access Management', 'recommendation_#': '1.13', 'Title': 'Ensure Dynamic Groups are used for OCI instances, OCI Cloud Databases and OCI Function to access OCI resources', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['6.8'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
'1.14': {'section': 'Identity and Access Management', 'recommendation_#': '1.14', 'Title': 'Ensure storage service-level admins cannot delete resources they manage', 'Status': None, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['5.4', '6.8'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
'2.1': {'section': 'Networking', 'recommendation_#': '2.1', 'Title': 'Ensure no security lists allow ingress from 0.0.0.0/0 to port 22.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'2.2': {'section': 'Networking', 'recommendation_#': '2.2', 'Title': 'Ensure no security lists allow ingress from 0.0.0.0/0 to port 3389.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'2.3': {'section': 'Networking', 'recommendation_#': '2.3', 'Title': 'Ensure no network security groups allow ingress from 0.0.0.0/0 to port 22.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'2.4': {'section': 'Networking', 'recommendation_#': '2.4', 'Title': 'Ensure no network security groups allow ingress from 0.0.0.0/0 to port 3389.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'2.5': {'section': 'Networking', 'recommendation_#': '2.5', 'Title': 'Ensure the default security list of every VCN restricts all traffic except ICMP.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'2.6': {'section': 'Networking', 'recommendation_#': '2.6', 'Title': 'Ensure Oracle Integration Cloud (OIC) access is restricted to allowed sources.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'2.7': {'section': 'Networking', 'recommendation_#': '2.7', 'Title': 'Ensure Oracle Analytics Cloud (OAC) access is restricted to allowed sources or deployed within a Virtual Cloud Network.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'2.8': {'section': 'Networking', 'recommendation_#': '2.8', 'Title': 'Ensure Oracle Autonomous Shared Database (ADB) access is restricted or deployed within a VCN.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
'3.1': {'section': 'Logging and Monitoring', 'recommendation_#': '3.1', 'Title': 'Ensure audit log retention period is set to 365 days.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['8.10'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.2': {'section': 'Logging and Monitoring', 'recommendation_#': '3.2', 'Title': 'Ensure default tags are used on resources.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['1.1'], 'CCCS Guard Rail': '', 'Remediation': []},
'3.3': {'section': 'Logging and Monitoring', 'recommendation_#': '3.3', 'Title': 'Create at least one notification topic and subscription to receive monitoring alerts.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['8.2', '8.11'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.4': {'section': 'Logging and Monitoring', 'recommendation_#': '3.4', 'Title': 'Ensure a notification is configured for Identity Provider changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.5': {'section': 'Logging and Monitoring', 'recommendation_#': '3.5', 'Title': 'Ensure a notification is configured for IdP group mapping changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.6': {'section': 'Logging and Monitoring', 'recommendation_#': '3.6', 'Title': 'Ensure a notification is configured for IAM group changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.7': {'section': 'Logging and Monitoring', 'recommendation_#': '3.7', 'Title': 'Ensure a notification is configured for IAM policy changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.8': {'section': 'Logging and Monitoring', 'recommendation_#': '3.8', 'Title': 'Ensure a notification is configured for user changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.9': {'section': 'Logging and Monitoring', 'recommendation_#': '3.9', 'Title': 'Ensure a notification is configured for VCN changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.10': {'section': 'Logging and Monitoring', 'recommendation_#': '3.10', 'Title': 'Ensure a notification is configured for changes to route tables.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.11': {'section': 'Logging and Monitoring', 'recommendation_#': '3.11', 'Title': 'Ensure a notification is configured for security list changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.12': {'section': 'Logging and Monitoring', 'recommendation_#': '3.12', 'Title': 'Ensure a notification is configured for network security group changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.13': {'section': 'Logging and Monitoring', 'recommendation_#': '3.13', 'Title': 'Ensure a notification is configured for changes to network gateways.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
'3.14': {'section': 'Logging and Monitoring', 'recommendation_#': '3.14', 'Title': 'Ensure VCN flow logging is enabled for all subnets.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['8.2', '8.5', '13.6'], 'CCCS Guard Rail': '', 'Remediation': []},
'3.15': {'section': 'Logging and Monitoring', 'recommendation_#': '3.15', 'Title': 'Ensure Cloud Guard is enabled in the root compartment of the tenancy.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['8.2', '8.5', '8.11'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []},
'3.16': {'section': 'Logging and Monitoring', 'recommendation_#': '3.16', 'Title': 'Ensure customer created Customer Managed Key (CMK) is rotated at least annually.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': [], 'CCCS Guard Rail': '6,7', 'Remediation': []},
'3.17': {'section': 'Logging and Monitoring', 'recommendation_#': '3.17', 'Title': 'Ensure write level Object Storage logging is enabled for all buckets.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['8.2'], 'CCCS Guard Rail': '', 'Remediation': []},
'4.1.1': {'section': 'Storage - Object Storage', 'recommendation_#': '4.1.1', 'Title': 'Ensure no Object Storage buckets are publicly visible.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.3'], 'CCCS Guard Rail': '', 'Remediation': []},
'4.1.2': {'section': 'Storage - Object Storage', 'recommendation_#': '4.1.2', 'Title': 'Ensure Object Storage Buckets are encrypted with a Customer-Managed Key (CMK).', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
'4.1.3': {'section': 'Storage - Object Storage', 'recommendation_#': '4.1.3', 'Title': 'Ensure Versioning is Enabled for Object Storage Buckets.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
'4.2.1': {'section': 'Storage - Block Volumes', 'recommendation_#': '4.2.1', 'Title': 'Ensure Block Volumes are encrypted with Customer-Managed Keys.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': ''},
'4.2.2': {'section': 'Storage - Block Volumes', 'recommendation_#': '4.2.2', 'Title': 'Ensure Boot Volumes are encrypted with Customer-Managed Key.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': ''},
'4.3.1': {'section': 'Storage - File Storage Service', 'recommendation_#': '4.3.1', 'Title': 'Ensure File Storage Systems are encrypted with Customer-Managed Keys.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
'5.1': {'section': 'Asset Management', 'recommendation_#': '5.1', 'Title': 'Create at least one compartment in your tenancy to store cloud resources.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.1'], 'CCCS Guard Rail': '2,3,8,12', 'Remediation': []},
'5.2': {'section': 'Asset Management', 'recommendation_#': '5.2', 'Title': 'Ensure no resources are created in the root compartment.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.12'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []}
}
# Remediation Report
self.cis_report_data = {
"1.1": {
"Description": "To apply least-privilege security principle, one can create service-level administrators in corresponding groups and assigning specific users to each service-level administrative group in a tenancy. This limits administrative access in a tenancy.
It means service-level administrators can only manage resources of a specific service.
Example policies for global/tenant level service-administrators\n
\nAllow group VolumeAdmins to manage volume-family in tenancy\nAllow group ComputeAdmins to manage instance-family in tenancy\nAllow group NetworkAdmins to manage virtual-network-family in tenancy\n
\nOrganizations have various ways of defining service-administrators. Some may prefer creating service administrators at a tenant level and some per department or per project or even per application environment (dev/test/production etc.). Either approach works so long as the policies are written to limit access given to the service-administrators.
Example policies for compartment level service-administrators
Allow group NonProdComputeAdmins to manage instance-family in compartment dev\nAllow group ProdComputeAdmins to manage instance-family in compartment production\nAllow group A-Admins to manage instance-family in compartment Project-A\nAllow group A-Admins to manage volume-family in compartment Project-A\n
",
"Rationale": "Creating service-level administrators helps in tightly controlling access to Oracle Cloud Infrastructure (OCI) services to implement the least-privileged security principle.",
"Impact": "",
"Remediation": "Refer to the policy syntax document and create new policies if the audit results indicate that the required policies are missing.",
"Recommendation": "",
"Observation": "custom IAM policy that grants tenancy administrative access."},
"1.2": {
"Description": "There is a built-in OCI IAM policy enabling the Administrators group to perform any action within a tenancy. In the OCI IAM console, this policy reads:
\nAllow group Administrators to manage all-resources in tenancy\n
Administrators create more users, groups, and policies to provide appropriate access to other groups.
Administrators should not allow any-other-group full access to the tenancy by writing a policy like this:
\nAllow group any-other-group to manage all-resources in tenancy\n
The access should be narrowed down to ensure the least-privileged principle is applied.",
"Rationale": "Permission to manage all resources in a tenancy should be limited to a small number of users in the 'Administrators' group for break-glass situations and to set up users/groups/policies when a tenancy is created.
No group other than 'Administrators' in a tenancy should need access to all resources in a tenancy, as this violates the enforcement of the least privilege principle.",
"Impact": "",
"Remediation": "Remove any policy statement that allows any group other than Administrators or any service access to manage all resources in the tenancy.",
"Recommendation": "Evaluate if tenancy-wide administrative access is needed for the identified policy and update it to be more restrictive.",
"Observation": "custom IAM policy that grants tenancy administrative access."},
"1.3": {
"Description": "Tenancy administrators can create more users, groups, and policies to provide other service administrators access to OCI resources.
For example, an IAM administrator will need to have access to manage\n resources like compartments, users, groups, dynamic-groups, policies, identity-providers, tenancy tag-namespaces, tag-definitions in the tenancy.
The policy that gives IAM-Administrators or any other group full access to 'groups' resources should not allow access to the tenancy 'Administrators' group.
The policy statements would look like:
\nAllow group IAMAdmins to inspect users in tenancy\nAllow group IAMAdmins to use users in tenancy where target.group.name != 'Administrators'\nAllow group IAMAdmins to inspect groups in tenancy\nAllow group IAMAdmins to use groups in tenancy where target.group.name != 'Administrators'\n
Note: You must include separate statements for 'inspect' access, because the target.group.name variable is not used by the ListUsers and ListGroups operations",
"Rationale": "These policy statements ensure that no other group can manage tenancy administrator users or the membership to the 'Administrators' group thereby gain or remove tenancy administrator access.",
"Impact": "",
"Remediation": "Verify the results to ensure that the policy statements that grant access to use or manage users or groups in the tenancy have a condition that excludes access to Administrators group or to users in the Administrators group.",
"Recommendation": "Evaluate if tenancy-wide administrative access is needed for the identified policy and update it to be more restrictive.",
"Observation": "custom IAM policy that grants tenancy administrative access."},
"1.4": {
"Description": "Password policies are used to enforce password complexity requirements. IAM password policies can be used to ensure password are at least a certain length and are composed of certain characters.
It is recommended the password policy require a minimum password length 14 characters and contain 1 non-alphabetic\ncharacter (Number or 'Special Character').",
"Rationale": "In keeping with the overall goal of having users create a password that is not overly weak, an eight-character minimum password length is recommended for an MFA account, and 14 characters for a password only account. In addition, maximum password length should be made as long as possible based on system/software capabilities and not restricted by policy.
In general, it is true that longer passwords are better (harder to crack), but it is also true that forced password length requirements can cause user behavior that is predictable and undesirable. For example, requiring users to have a minimum 16-character password may cause them to choose repeating patterns like fourfourfourfour or passwordpassword that meet the requirement but aren't hard to guess. Additionally, length requirements increase the chances that users will adopt other insecure practices, like writing them down, re-using them or storing them unencrypted in their documents.
Password composition requirements are a poor defense against guessing attacks. Forcing users to choose some combination of upper-case, lower-case, numbers, and special characters has a negative impact. It places an extra burden on users and many\nwill use predictable patterns (for example, a capital letter in the first position, followed by lowercase letters, then one or two numbers, and a “special character” at the end). Attackers know this, so dictionary attacks will often contain these common patterns and use the most common substitutions like, $ for s, @ for a, 1 for l, 0 for o.
Passwords that are too complex in nature make it harder for users to remember, leading to bad practices. In addition, composition requirements provide no defense against common attack types such as social engineering or insecure storage of passwords.",
"Impact": "",
"Remediation": "Update the password policy such as minimum length to 14, password must contain expected special characters and numeric characters.",
"Recommendation": "It is recommended the password policy require a minimum password length 14 characters and contain 1 non-alphabetic character (Number or 'Special Character').",
"Observation": "password policy/policies that do not enforce sufficient password complexity requirements."},
"1.5": {
"Description": "IAM password policies can require passwords to be rotated or expired after a given number of days. It is recommended that the password policy expire passwords after 365 and are changed immediately based on events.",
"Rationale": "Excessive password expiration requirements do more harm than good, because these requirements make users select predictable passwords, composed of sequential words and numbers that are closely related to each other.10 In these cases, the next password can be predicted based on the previous one (incrementing a number used in the password for example). Also, password expiration requirements offer no containment benefits because attackers will often use credentials as soon as they compromise them. Instead, immediate password changes should be based on key events including, but not\nlimited to:
1. Indication of compromise\n1. Change of user roles\n1. When a user leaves the organization.
Not only does changing passwords every few weeks or months frustrate the user, it's been suggested that it does more harm than good, because it could lead to bad practices by the user such as adding a character to the end of their existing password.
In addition, we also recommend a yearly password change. This is primarily because for all their good intentions users will share credentials across accounts. Therefore, even if a breach is publicly identified, the user may not see this notification, or forget they have an account on that site. This could leave a shared credential vulnerable indefinitely. Having an organizational policy of a 1-year (annual) password expiration is a reasonable compromise to mitigate this with minimal user burden.",
"Impact": "",
"Remediation": "Update the password policy by setting number of days configured in Expires after to 365.",
"Recommendation": "Evaluate password rotation policies are inline with your organizational standard.",
"Observation": "password policy/policies that do not require rotation."},
"1.6": {
"Description": "IAM password policies can prevent the reuse of a given password by the same user. It is recommended the password policy prevent the reuse of passwords.",
"Rationale": "Enforcing password history ensures that passwords are not reused in for a certain period of time by the same user. If a user is not allowed to use last 24 passwords, that window of time is greater. This helps maintain the effectiveness of password security.",
"Impact": "",
"Remediation": "Update the number of remembered passwords in previous passwords remembered setting to 24 in the password policy.",
"Recommendation": "Evaluate password reuse policies are inline with your organizational standard.",
"Observation": "password policy/policies that do not prevent reuse."},
"1.7": {
"Description": "Multi-factor authentication is a method of authentication that requires the use of more than one factor to verify a user's identity.
With MFA enabled in the IAM service, when a user signs in to Oracle Cloud Infrastructure, they are prompted for their user name and password, which is the first factor (something that they know). The user is then prompted to provide a second verification code from a registered MFA device, which is the second factor (something that they have). The two factors work together, requiring an extra layer of security to verify the user's identity and complete the sign-in process.
OCI IAM supports two-factor authentication using a password (first factor) and a device that can generate a time-based one-time password (TOTP) (second factor).
See [OCI documentation](https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Tasks/usingmfa.htm) for more details.",
"Rationale": "Multi factor authentication adds an extra layer of security during the login process and makes it harder for unauthorized users to gain access to OCI resources.",
"Impact": "",
"Remediation": "Each user must enable MFA for themselves using a device they will have access to every time they sign in. An administrator cannot enable MFA for another user but can enforce MFA by identifying the list of non-complaint users, notifying them or disabling access by resetting password for non-complaint accounts.",
"Recommendation": "Evaluate if local users are required. For Break Glass accounts ensure MFA is in place.",
"Observation": "users with Password access but not MFA."},
"1.8": {
"Description": "API keys are used by administrators, developers, services and scripts for accessing OCI APIs directly or via SDKs/OCI CLI to search, create, update or delete OCI resources.
The API key is an RSA key pair. The private key is used for signing the API requests and the public key is associated with a local or synchronized user's profile.",
"Rationale": "It is important to secure and rotate an API key every 90 days or less as it provides the same level of access that a user it is associated with has.
In addition to a security engineering best practice, this is also a compliance requirement. For example, PCI-DSS Section 3.6.4 states, \"Verify that key-management procedures include a defined cryptoperiod for each key type in use and define a process for key changes at the end of the defined crypto period(s).\"",
"Impact": "",
"Remediation": "Delete any API Keys with a date of 90 days or older under the Created column of the API Key table.",
"Recommendation": "Evaluate if APIs Keys are still used/required and rotate API Keys It is important to secure and rotate an API key every 90 days or less as it provides the same level of access that a user it is associated with has.",
"Observation": "user(s) with APIs that have not been rotated with 90 days."},
"1.9": {
"Description": "Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3.
This special signing key is an Access Key/Secret Key pair. Oracle generates the Customer Secret key to pair with the Access Key.",
"Rationale": "It is important to secure and rotate an customer secret key every 90 days or less as it provides the same level of object storage access that a user is associated with has.",
"Impact": "",
"Remediation": "Delete any Access Keys with a date of 90 days or older under the Created column of the Customer Secret Keys.",
"Recommendation": "Evaluate if Customer Secret Keys are still used/required and rotate the Keys accordingly.",
"Observation": "users with Customer Secret Keys that have not been rotated with 90 days."},
"1.10": {
"Description": "Auth tokens are authentication tokens generated by Oracle. You use auth tokens to authenticate with APIs that do not support the Oracle Cloud Infrastructure signature-based authentication. If the service requires an auth token, the service-specific documentation instructs you to generate one and how to use it.",
"Rationale": "It is important to secure and rotate an auth token every 90 days or less as it provides the same level of access to APIs that do not support the OCI signature-based authentication as the user associated to it.",
"Impact": "",
"Remediation": "Delete any auth token with a date of 90 days or older under the Created column of the Auth Tokens.",
"Recommendation": "Evaluate if Auth Tokens are still used/required and rotate Auth tokens.",
"Observation": "user(s) with auth tokens that have not been rotated in 90 days."},
"1.11": {
"Description": "Tenancy administrator users have full access to the organization's OCI tenancy. API keys associated with user accounts are used for invoking the OCI APIs via custom programs or clients like CLI/SDKs. The clients are typically used for performing day-to-day operations and should never require full tenancy access. Service-level administrative users with API keys should be used instead.",
"Rationale": "For performing day-to-day operations tenancy administrator access is not needed.\nService-level administrative users with API keys should be used to apply privileged security principle.",
"Impact": "",
"Remediation": "For each tenancy administrator user who has an API key,select API Keys from the menu and delete any associated keys from the API Keys table.",
"Recommendation": "Evaluate if a user with API Keys requires Administrator access and use a least privilege approach.",
"Observation": "users with Administrator access and API Keys."},
"1.12": {
"Description": "All OCI IAM local user accounts have an email address field associated with the account. It is recommended to specify an email address that is valid and current.
If you have an email address in your user profile, you can use the Forgot Password link on the sign on page to have a temporary password sent to you.",
"Rationale": "Having a valid and current email address associated with an OCI IAM local user account allows you to tie the account to identity in your organization. It also allows that user to reset their password if it is forgotten or lost.",
"Impact": "",
"Remediation": "Update the current email address in the email text box on exch non compliant user.",
"Recommendation": "Add emails to users to allow them to use the 'Forgot Password' feature and uniquely identify the user. For service accounts it could be a mail alias.",
"Observation": "without an email."},
"1.13": {
"Description": "OCI instances, OCI database and OCI functions can access other OCI resources either via an OCI API key associated to a user or by being including in a Dynamic Group that has an IAM policy granting it the required access. Access to OCI Resources refers to making API calls to another OCI resource like Object Storage, OCI Vaults, etc.",
"Rationale": "Dynamic Groups reduces the risks related to hard coded credentials. Hard coded API keys can be shared and require rotation which can open them up to being compromised. Compromised credentials could allow access to OCI services outside of the expected radius.",
"Impact": "For an OCI instance that contains embedded credential audit the scripts and environment variables to ensure that none of them contain OCI API Keys or credentials.",
"Remediation": "Create Dynamic group and Enter Matching Rules to that includes the instances accessing your OCI resources. Refer:\"https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingdynamicgroups.htm\".",
"Recommendation": "Evaluate how your instances, functions, and autonomous database interact with other OCI services.",
"Observation": "Dynamic Groups reduces the risks related to hard coded credentials. Hard coded API keys can be shared and require rotation which can open them up to being compromised. Compromised credentials could allow access to OCI services outside of the expected radius."},
"1.14": {
"Description": "To apply the separation of duties security principle, one can restrict service-level administrators from being able to delete resources they are managing. It means service-level administrators can only manage resources of a specific service but not delete resources for that specific service.
Example policies for global/tenant level for block volume service-administrators:\n
\nAllow group VolumeUsers to manage volumes in tenancy where request.permission!='VOLUME_DELETE'\nAllow group VolumeUsers to manage volume-backups in tenancy where request.permission!='VOLUME_BACKUP_DELETE'\n
Example policies for global/tenant level for file storage system service-administrators:
\nAllow group FileUsers to manage file-systems in tenancy where request.permission!='FILE_SYSTEM_DELETE'\nAllow group FileUsers to manage mount-targets in tenancy where request.permission!='MOUNT_TARGET_DELETE'\nAllow group FileUsers to manage export-sets in tenancy where request.permission!='EXPORT_SET_DELETE'\n
Example policies for global/tenant level for object storage system service-administrators:
\nAllow group BucketUsers to manage objects in tenancy where request.permission!='OBJECT_DELETE'\nAllow group BucketUsers to manage buckets in tenancy where request.permission!='BUCKET_DELETE'\n
",
"Rationale": "Creating service-level administrators without the ability to delete the resource they are managing helps in tightly controlling access to Oracle Cloud Infrastructure (OCI) services by implementing the separation of duties security principle.", "Impact": "",
"Remediation": "Add the appropriate where condition to any policy statement that allows the storage service-level to manage the storage service.",
"Recommendation": "To apply a separation of duties security principle, it is recommended to restrict service-level administrators from being able to delete resources they are managing.",
"Observation": "IAM Policies that give service administrator the ability to delete service resources."},
"2.1": {
"Description": "Security lists provide stateful or stateless filtering of ingress/egress network traffic to OCI resources on a subnet level. It is recommended that no security group allows unrestricted ingress access to port 22.",
"Rationale": "Removing unfettered connectivity to remote console services, such as Secure Shell (SSH), reduces a server's exposure to risk.",
"Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
"Remediation": "For each security list in the returned results, click the security list name. Either edit the ingress rule to be more restrictive, delete the ingress rule or click on the VCN and terminate the security list as appropriate.",
"Recommendation": "Review the security lists. If they are not used(attached to a subnet) they should be deleted if possible or empty. For attached security lists it is recommended to restrict the CIDR block to only allow access to Port 22 from known networks.",
"Observation": "Security lists that allow internet access to port 22. (Note this does not necessarily mean external traffic can reach a compute instance)."},
"2.2": {
"Description": "Security lists provide stateful or stateless filtering of ingress/egress network traffic to OCI resources on a subnet level. It is recommended that no security group allows unrestricted ingress access to port 3389.",
"Rationale": "Removing unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), reduces a server's exposure to risk.",
"Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
"Remediation": "For each security list in the returned results, click the security list name. Either edit the ingress rule to be more restrictive, delete the ingress rule or click on the VCN and terminate the security list as appropriate.",
"Recommendation": "Review the security lists. If they are not used(attached to a subnet) they should be deleted if possible or empty. For attached security lists it is recommended to restrict the CIDR block to only allow access to Port 3389 from known networks.",
"Observation": "Security lists that allow internet access to port 3389. (Note this does not necessarily mean external traffic can reach a compute instance)."
},
"2.3": {
"Description": "Network security groups provide stateful filtering of ingress/egress network traffic to OCI resources. It is recommended that no security group allows unrestricted ingress access to port 22.",
"Rationale": "Removing unfettered connectivity to remote console services, such as Secure Shell (SSH), reduces a server's exposure to risk.",
"Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
"Remediation": "Using the details returned from the audit procedure either Remove the security rules or Update the security rules.",
"Recommendation": "Review the network security groups. If they are not used(attached to a subnet) they should be deleted if possible or empty. For attached security lists it is recommended to restrict the CIDR block to only allow access to Port 3389 from known networks.",
"Observation": "Network security groups that allow internet access to port 22. (Note this does not necessarily mean external traffic can reach a compute instance)."
},
"2.4": {
"Description": "Network security groups provide stateful filtering of ingress/egress network traffic to OCI resources. It is recommended that no security group allows unrestricted ingress access to port 3389.",
"Rationale": "Removing unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), reduces a server's exposure to risk.",
"Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
"Remediation": "Using the details returned from the audit procedure either Remove the security rules or Update the security rules.",
"Recommendation": "Review the network security groups. If they are not used(attached to a subnet) they should be deleted if possible or empty. For attached network security groups it is recommended to restrict the CIDR block to only allow access to Port 3389 from known networks.",
"Observation": "Network security groups that allow internet access to port 3389. (Note this does not necessarily mean external traffic can reach a compute instance)."
},
"2.5": {
"Description": "A default security list is created when a Virtual Cloud Network (VCN) is created. Security lists provide stateful filtering of ingress and egress network traffic to OCI resources. It is recommended no security list allows unrestricted ingress access to Secure Shell (SSH) via port 22.",
"Rationale": "Removing unfettered connectivity to remote console services, such as SSH on port 22, reduces a server's exposure to unauthorized access.",
"Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another security group.",
"Remediation": "Select Default Security List for and Remove the Ingress Rule with Source 0.0.0.0/0, IP Protocol 22 and Destination Port Range 22.",
"Recommendation": "Create specific custom security lists with workload specific rules and attach to subnets.",
"Observation": "Default Security lists that allow more traffic then ICMP."
},
"2.6": {
"Description": "Oracle Integration (OIC) is a complete, secure, but lightweight integration solution that enables you to connect your applications in the cloud. It simplifies connectivity between your applications and connects both your applications that live in the cloud and your applications that still live on premises. Oracle Integration provides secure, enterprise-grade connectivity regardless of the applications you are connecting or where they reside. OIC instances are created within an Oracle managed secure private network with each having a public endpoint. The capability to configure ingress filtering of network traffic to protect your OIC instances from unauthorized network access is included. It is recommended that network access to your OIC instances be restricted to your approved corporate IP Addresses or Virtual Cloud Networks (VCN)s.",
"Rationale": "Restricting connectivity to OIC Instances reduces an OIC instance's exposure to risk.",
"Impact": "When updating ingress filters for an existing environment, care should be taken to ensure that IP addresses and VCNs currently used by administrators, users, and services to access your OIC instances are included in the updated filters.",
"Remediation": "For each OIC instance in the returned results, select the OIC Instance name,edit the Network Access to be more restrictive.",
"Recommendation": "It is recommended that OIC Network Access is restricted to your corporate IP Addresses or VCNs for OIC Instances.",
"Observation": "OIC Instances that allow unfiltered public ingress traffic (Authentication and authorization is still required)."
},
"2.7": {
"Description": "Oracle Analytics Cloud (OAC) is a scalable and secure public cloud service that provides a full set of capabilities to explore and perform collaborative analytics for you, your workgroup, and your enterprise. OAC instances provide ingress filtering of network traffic or can be deployed with in an existing Virtual Cloud Network VCN. It is recommended that all new OAC instances be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing OAC instances.",
"Rationale": "Restricting connectivity to Oracle Analytics Cloud instances reduces an OAC instance's exposure to risk.",
"Impact": "When updating ingress filters for an existing environment, care should be taken to ensure that IP addresses and VCNs currently used by administrators, users, and services to access your OAC instances are included in the updated filters. Also, these changes will temporarily bring the OAC instance offline.",
"Remediation": "For each OAC instance in the returned results, select the OAC Instance name edit the Access Control Rules by clicking +Another Rule and add rules as required.",
"Recommendation": "It is recommended that all new OAC instances be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing OAC instances.",
"Observation": "OAC Instances that allow unfiltered public ingress traffic (Authentication and authorization is still required)."
},
"2.8": {
"Description": "Oracle Autonomous Database Shared (ADB-S) automates database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs. ADB-S provide ingress filtering of network traffic or can be deployed within an existing Virtual Cloud Network (VCN). It is recommended that all new ADB-S databases be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing ADB-S databases.",
"Rationale": "Restricting connectivity to ADB-S Databases reduces an ADB-S database's exposure to risk.",
"Impact": "When updating ingress filters for an existing environment, care should be taken to ensure that IP addresses and VCNs currently used by administrators, users, and services to access your ADB-S instances are included in the updated filters.",
"Remediation": "For each ADB-S database in the returned results, select the ADB-S database name edit the Access Control Rules by clicking +Another Rule and add rules as required.",
"Recommendation": "It is recommended that all new ADB-S databases be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing ADB-S databases.",
"Observation": "ADB-S Instances that allow unfiltered public ingress traffic (Authentication and authorization is still required)."
},
"3.1": {
"Description": "Ensuring audit logs are kept for 365 days.",
"Rationale": "Log retention controls how long activity logs should be retained. Studies have shown that The Mean Time to Detect (MTTD) a cyber breach is anywhere from 30 days in some sectors to up to 206 days in others. Retaining logs for at least 365 days or more will provide the ability to respond to incidents.",
"Impact": "There is no performance impact when enabling the above described features but additional audit data will be retained.",
"Remediation": "Go to the Tenancy Details page and edit Audit Retention Policy by setting AUDIT RETENTION PERIOD to 365.",
"Recommendation": "",
"Observation": ""
},
"3.2": {
"Description": "Using default tags is a way to ensure all resources that support tags are tagged during creation. Tags can be based on static values or based on computed values. It is recommended to setup default tags early on to ensure all created resources will get tagged.\nTags are scoped to Compartments and are inherited by Child Compartments. The recommendation is to create default tags like “CreatedBy” at the Root Compartment level to ensure all resources get tagged.\nWhen using Tags it is important to ensure that Tag Namespaces are protected by IAM Policies otherwise this will allow users to change tags or tag values.\nDepending on the age of the OCI Tenancy there may already be Tag defaults setup at the Root Level and no need for further action to implement this action.",
"Rationale": "In the case of an incident having default tags like “CreatedBy” applied will provide info on who created the resource without having to search the Audit logs.",
"Impact": "There is no performance impact when enabling the above described features",
"Remediation": "Update the root compartments tag default link.In the Tag Defaults table verify that there is a Tag with a value of \"${iam.principal.names}\" and a Tag Key Status of Active. Also cretae a Tag key definition by providing a Tag Key, Description and selecting 'Static Value' for Tag Value Type.",
"Recommendation": "",
"Observation": ""
},
"3.3": {
"Description": "Notifications provide a multi-channel messaging service that allow users and applications to be notified of events of interest occurring within OCI. Messages can be sent via eMail, HTTPs, PagerDuty, Slack or the OCI Function service. Some channels, such as eMail require confirmation of the subscription before it becomes active.",
"Rationale": "Creating one or more notification topics allow administrators to be notified of relevant changes made to OCI infrastructure.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Create a Topic in the notifications service under the appropriate compartment and add the subscriptions with current email address and correct protocol.",
"Recommendation": "",
"Observation": ""
},
"3.4": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Identity Providers are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments. It is recommended to create the Event rule at the root compartment level.",
"Rationale": "OCI Identity Providers allow management of User ID / passwords in external systems and use of those credentials to access OCI resources. Identity Providers allow users to single sign-on to OCI console and have other OCI credentials like API Keys.\nMonitoring and alerting on changes to Identity Providers will help in identifying changes to the security posture.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Create a Rule Condition in the Events services by selecting Identity in the Service Name Drop-down and selecting Identity Provider – Create, Identity Provider - Delete and Identity Provider – Update. In the Actions section select Notifications as Action Type and selct the compartment and topic to be used.",
"Recommendation": "",
"Observation": ""
},
"3.5": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Identity Provider Group Mappings are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments. It is recommended to create the Event rule at the root compartment level",
"Rationale": "IAM Policies govern access to all resources within an OCI Tenancy. IAM Policies use OCI Groups for assigning the privileges. Identity Provider Groups could be mapped to OCI Groups to assign privileges to federated users in OCI. Monitoring and alerting on changes to Identity Provider Group mappings will help in identifying changes to the security posture.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Find and click the Rule that handles Idp Group Mapping Changes. Click the Edit Rule button and verify that the RuleConditions section contains a condition for the Service Identity and Event Types: Idp Group Mapping – Create, Idp Group Mapping – Delete, and Idp Group Mapping – Update and confirm Action Type contains: Notifications and that a valid Topic is referenced.",
"Recommendation": "",
"Observation": ""
},
"3.6": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when IAM Groups are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "IAM Groups control access to all resources within an OCI Tenancy.\n Monitoring and alerting on changes to IAM Groups will help in identifying changes to satisfy least privilege principle.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Group – Create, Group – Delete and Group – Update. In the Actions section select Notifications as Action Type and selct the compartment and topic to be used.",
"Recommendation": "",
"Observation": ""
},
"3.7": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when IAM Policies are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "IAM Policies govern access to all resources within an OCI Tenancy.\n Monitoring and alerting on changes to IAM policies will help in identifying changes to the security posture.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Policy – Change Compartment, Policy – Create, Policy - Delete and Policy – Update. In the Actions section select Notifications as Action Type and selct the compartment and topic to be used.",
"Recommendation": "",
"Observation": ""
},
"3.8": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when IAM Users are created, updated, deleted, capabilities updated, or state updated. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "Users use or manage Oracle Cloud Infrastructure resources.\n Monitoring and alerting on changes to Users will help in identifying changes to the security posture.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Edit Rule that handles IAM User Changes and verify that the Rule Conditions section contains a condition for the Service Identity and Event Types: User – Create, User – Delete, User – Update, User Capabilities – Update, User State – Update.",
"Recommendation": "",
"Observation": ""
},
"3.9": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Virtual Cloud Networks are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "Virtual Cloud Networks (VCNs) closely resembles a traditional network.\n Monitoring and alerting on changes to VCNs will help in identifying changes to the security posture.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Edit Rule that handles VCN Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: VCN – Create, VCN - Delete, and VCN – Update.",
"Recommendation": "",
"Observation": ""
},
"3.10": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when route tables are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "Route tables control traffic flowing to or from Virtual Cloud Networks and Subnets.\n Monitoring and alerting on changes to route tables will help in identifying changes these traffic flows.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Edit Rule that handles Route Table Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: Route Table – Change Compartment, Route Table – Create, Route Table - Delete, and Route Table – Update.",
"Recommendation": "",
"Observation": ""
},
"3.11": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when security lists are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "Security Lists control traffic flowing into and out of Subnets within a Virtual Cloud Network.\n Monitoring and alerting on changes to Security Lists will help in identifying changes to these security controls.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Edit Rule that handles Security List Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: Security List – Change Compartment, Security List – Create, Security List - Delete, and Security List – Update.",
"Recommendation": "",
"Observation": ""
},
"3.12": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when network security groups are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "Network Security Groups control traffic flowing between Virtual Network Cards attached to Compute instances.\n Monitoring and alerting on changes to Network Security Groups will help in identifying changes these security controls.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Edit Rule that handles Network Security Group Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: Network Security Group – Change Compartment, Network Security Group – Create, Network Security Group - Delete, and Network Security Group – Update.",
"Recommendation": "",
"Observation": ""
},
"3.13": {
"Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Network Gateways are created, updated, deleted, attached, detached, or moved. This recommendation includes Internet Gateways, Dynamic Routing Gateways, Service Gateways, Local Peering Gateways, and NAT Gateways. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
"Rationale": "Network Gateways act as routers between VCNs and the Internet, Oracle Services Networks, other VCNS, and on-premise networks.\n Monitoring and alerting on changes to Network Gateways will help in identifying changes to the security posture.",
"Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
"Remediation": "Edit Rule that handles Network Gateways Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: DRG – Create, DRG - Delete, DRG - Update, DRG Attachment – Create, DRG Attachment – Delete, DRG Attachment - Update, Internet Gateway – Create, Internet Gateway – Delete, Internet Gateway - Update, Internet Gateway – Change Compartment, Local Peering Gateway – Create, Local Peering Gateway – Delete End, Local Peering Gateway - Update, Local Peering Gateway – Change Compartment, NAT Gateway – Create, NAT Gateway – Delete, NAT Gateway - Update, NAT Gateway – Change Compartment,Compartment, Service Gateway – Create, Service Gateway – Delete Begin, Service Gateway – Delete End, Service Gateway – Update, Service Gateway – Attach Service, Service Gateway – Detach Service, Service Gateway – Change Compartment.",
"Recommendation": "",
"Observation": ""
},
"3.14": {
"Description": "VCN flow logs record details about traffic that has been accepted or rejected based on the security list rule.",
"Rationale": "Enabling VCN flow logs enables you to monitor traffic flowing within your virtual network and can be used to detect anomalous traffic.",
"Impact": "Enabling VCN flow logs will not affect the performance of your virtual network but it will generate additional use of object storage that should be controlled via object lifecycle management.
By default, VCN flow logs are stored for 30 days in object storage. Users can specify a longer retention period.",
"Remediation": "Enable Flow Logs (all records) on Virtual Cloud Networks (subnets) under the relevant resource compartment. Before hand create Log group if not exist in the Log services.",
"Recommendation": "",
"Observation": ""
},
"3.15": {
"Description": "Cloud Guard detects misconfigured resources and insecure activity within a tenancy and provides security administrators with the visibility to resolve these issues. Upon detection, Cloud Guard can suggest, assist, or take corrective actions to mitigate these issues. Cloud Guard should be enabled in the root compartment of your tenancy with the default configuration, activity detectors and responders.",
"Rationale": "Cloud Guard provides an automated means to monitor a tenancy for resources that are configured in an insecure manner as well as risky network activity from these resources.",
"Impact": "There is no performance impact when enabling the above described features, but additional IAM policies will be required.",
"Remediation": "Enable the cloud guard by selecting the services in the menu and provide appropriate reporting region and other configurations.",
"Recommendation": "",
"Observation": ""
},
"3.16": {
"Description": "Oracle Cloud Infrastructure Vault securely stores master encryption keys that protect your encrypted data. You can use the Vault service to rotate keys to generate new cryptographic material. Periodically rotating keys limits the amount of data encrypted by one key version.",
"Rationale": "Rotating keys annually limits the data encrypted under one key version. Key rotation thereby reduces the risk in case a key is ever compromised.",
"Impact": "",
"Remediation": "Select the Security service and then Vault. Ensure the date under the Created column for each Master Encryption Key is no more than 365 days old.",
"Recommendation": "",
"Observation": ""
},
"3.17": {
"Description": "Object Storage write logs will log all write requests made to objects in a bucket.",
"Rationale": "When an Object Storage write log is enabled, the 'requestAction' property contains values of 'PUT', 'POST', or 'DELETE'. This provides more visibility into changes to objects in your buckets.",
"Impact": "There is no performance impact when enabling the above described features, but will generate additional use of object storage that should be controlled via object lifecycle management.
By default, Object Storage logs are stored for 30 days in object storage. Users can specify a longer retention period.",
"Remediation": "For the relevant bucket, enable a log by selecting Write Access Events from the Log Category. Beforehand, create a log group if required.",
"Recommendation": "",
"Observation": ""
},
"4.1.1": {
"Description": "A bucket is a logical container for storing objects. It is associated with a single compartment that has policies that determine what action a user can perform on a bucket and on all the objects in the bucket. It is recommended that no bucket be publicly accessible.",
"Rationale": "Removing unfettered reading of objects in a bucket reduces an organization's exposure to data loss.",
"Impact": "For updating an existing bucket, care should be taken to ensure objects in the bucket can be accessed through either IAM policies or pre-authenticated requests.",
"Remediation": "Edit the visibility of each bucket and set it to 'private'.",
"Recommendation": "",
"Observation": ""
},
"4.1.2": {
"Description": "Oracle Object Storage buckets support encryption with a Customer Managed Key (CMK). By default, Object Storage buckets are encrypted with an Oracle managed key.",
"Rationale": "Encryption of Object Storage buckets with a Customer Managed Key (CMK) provides an additional level of security on your data by allowing you to manage your own encryption key lifecycle management for the bucket.",
"Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the Object Storage service to use keys on your behalf.
Required Policy:\n
\nAllow service objectstorage-<region_name>, to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'
",
"Remediation": "For each Object Storage bucket, assign a Master Encryption Key by clicking Assign next to Encryption Key and selecting the Vault.",
"Recommendation": "",
"Observation": ""
},
"4.1.3": {
"Description": "A bucket is a logical container for storing objects. Object versioning is enabled at the bucket level and is disabled by default upon creation. Versioning directs Object Storage to automatically create an object version each time a new object is uploaded, an existing object is overwritten, or when an object is deleted. You can enable object versioning at bucket creation time or later.",
"Rationale": "Versioning object storage buckets provides for additional integrity of your data. Management of data integrity is critical to protecting and accessing protected data. Some customers want to identify object storage buckets without versioning in order to apply their own data lifecycle protection and management policy.",
"Impact": "",
"Remediation": "Enable versioning on each bucket by editing the bucket configuration.",
"Recommendation": "",
"Observation": ""
},
"4.2.1": {
"Description": "Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and manage block storage volumes. By default, the Oracle service manages the keys that encrypt this block volume. Block Volumes can also be encrypted using a customer managed key.",
"Rationale": "Encryption of block volumes provides an additional level of security for your data. Management of encryption keys is critical to protecting and accessing protected data. Customers should identify block volumes encrypted with Oracle service managed keys in order to determine if they want to manage the keys for certain volumes and then apply their own key lifecycle management to the selected block volumes.",
"Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the Block Volume service to use the keys you create.\nRequired IAM Policy:\n
\nAllow service blockstorage to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'\n
",
"Remediation": "For each block volume in the results, assign the encryption key by selecting the Vault Compartment and Vault, selecting the Master Encryption Key Compartment and Master Encryption Key, and clicking Assign.",
"Recommendation": "",
"Observation": ""
},
"4.2.2": {
"Description": "When you launch a virtual machine (VM) or bare metal instance based on a platform image or custom image, a new boot volume for the instance is created in the same compartment. That boot volume is associated with that instance until you terminate the instance. By default, the Oracle service manages the keys that encrypt this boot volume. Boot Volumes can also be encrypted using a customer managed key.",
"Rationale": "Encryption of boot volumes provides an additional level of security for your data. Management of encryption keys is critical to protecting and accessing protected data. Customers should identify boot volumes encrypted with Oracle service managed keys in order to determine if they want to manage the keys for certain boot volumes and then apply their own key lifecycle management to the selected boot volumes.",
"Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the Boot Volume service to use the keys you create.\nRequired IAM Policy:\n
\nAllow service Bootstorage to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'\n
",
"Remediation": "For each boot volume in the results, assign the encryption key by selecting the Vault Compartment and Vault, selecting the Master Encryption Key Compartment and Master Encryption Key, and clicking Assign.",
"Recommendation": "",
"Observation": ""
},
"4.3.1": {
"Description": "Oracle Cloud Infrastructure File Storage service (FSS) provides a durable, scalable, secure, enterprise-grade network file system. By default, the Oracle service manages the keys that encrypt FSS file systems. FSS file systems can also be encrypted using a customer managed key.",
"Rationale": "Encryption of FSS systems provides an additional level of security for your data. Management of encryption keys is critical to protecting and accessing protected data. Customers should identify FSS file systems that are encrypted with Oracle service managed keys in order to determine if they want to manage the keys for certain FSS file systems and then apply their own key lifecycle management to the selected FSS file systems.",
"Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the File Storage service to use the keys you create.\nRequired IAM Policy:\n
\nAllow service FssOc1Prod to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'\n
",
"Remediation": "For each file storage system in the results, assign the encryption key by selecting the Vault Compartment and Vault, selecting the Master Encryption Key Compartment and Master Encryption Key, and clicking Assign.",
"Recommendation": "",
"Observation": ""
},
"5.1": {
"Description": "When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy, which is the root compartment that holds all your cloud resources. You then create additional compartments within the tenancy (root compartment) and corresponding policies to control access to the resources in each compartment.
Compartments allow you to organize and control access to your cloud resources. A compartment is a collection of related resources (such as instances, databases, virtual cloud networks, block volumes) that can be accessed only by certain groups that have been given permission by an administrator.",
"Rationale": "Compartments are a logical group that adds an extra layer of isolation, organization and authorization making it harder for unauthorized users to gain access to OCI resources.",
"Impact": "Once the compartment is created, an OCI IAM policy must be created to allow a group to manage resources in the compartment; otherwise only groups with tenancy-level access will have access.",
"Remediation": "Create the new compartment under the root compartment.",
"Recommendation": "",
"Observation": ""
},
"5.2": {
"Description": "When you create a cloud resource such as an instance, block volume, or cloud network, you must specify to which compartment you want the resource to belong. Placing resources in the root compartment makes it difficult to organize and isolate those resources.",
"Rationale": "Placing resources into a compartment will allow you to organize and have more granular access controls to your cloud resources.",
"Impact": "Placing a resource in a compartment will impact how you write policies to manage access and organize that resource.",
"Remediation": "For each item in the returned results, select Move Resource (or More Actions, then Move Resource), choose a compartment other than root, and move the resource.",
"Recommendation": "",
"Observation": ""
}
}
# MAP Checks
self.obp_foundations_checks = {
'Cost_Tracking_Budgets': {'Status': False, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en-us/iaas/Content/Billing/Concepts/budgetsoverview.htm#Budgets_Overview"},
'SIEM_Audit_Log_All_Comps': {'Status': True, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"}, # Assuming True
'SIEM_Audit_Incl_Sub_Comp': {'Status': True, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"}, # Assuming True
'SIEM_VCN_Flow_Logging': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"},
'SIEM_Write_Bucket_Logs': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"},
'SIEM_Read_Bucket_Logs': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"},
'Networking_Connectivity': {'Status': True, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en-us/iaas/Content/Network/Troubleshoot/drgredundancy.htm"},
'Cloud_Guard_Config': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": ""},
}
# MAP Regional Data
self.__obp_regional_checks = {}
# CIS monitoring notifications check
self.cis_monitoring_checks = {
"3.4": [
'com.oraclecloud.identitycontrolplane.createidentityprovider',
'com.oraclecloud.identitycontrolplane.deleteidentityprovider',
'com.oraclecloud.identitycontrolplane.updateidentityprovider'
],
"3.5": [
'com.oraclecloud.identitycontrolplane.createidpgroupmapping',
'com.oraclecloud.identitycontrolplane.deleteidpgroupmapping',
'com.oraclecloud.identitycontrolplane.updateidpgroupmapping'
],
"3.6": [
'com.oraclecloud.identitycontrolplane.creategroup',
'com.oraclecloud.identitycontrolplane.deletegroup',
'com.oraclecloud.identitycontrolplane.updategroup'
],
"3.7": [
'com.oraclecloud.identitycontrolplane.createpolicy',
'com.oraclecloud.identitycontrolplane.deletepolicy',
'com.oraclecloud.identitycontrolplane.updatepolicy'
],
"3.8": [
'com.oraclecloud.identitycontrolplane.createuser',
'com.oraclecloud.identitycontrolplane.deleteuser',
'com.oraclecloud.identitycontrolplane.updateuser',
'com.oraclecloud.identitycontrolplane.updateusercapabilities',
'com.oraclecloud.identitycontrolplane.updateuserstate'
],
"3.9": [
'com.oraclecloud.virtualnetwork.createvcn',
'com.oraclecloud.virtualnetwork.deletevcn',
'com.oraclecloud.virtualnetwork.updatevcn'
],
"3.10": [
'com.oraclecloud.virtualnetwork.changeroutetablecompartment',
'com.oraclecloud.virtualnetwork.createroutetable',
'com.oraclecloud.virtualnetwork.deleteroutetable',
'com.oraclecloud.virtualnetwork.updateroutetable'
],
"3.11": [
'com.oraclecloud.virtualnetwork.changesecuritylistcompartment',
'com.oraclecloud.virtualnetwork.createsecuritylist',
'com.oraclecloud.virtualnetwork.deletesecuritylist',
'com.oraclecloud.virtualnetwork.updatesecuritylist'
],
"3.12": [
'com.oraclecloud.virtualnetwork.changenetworksecuritygroupcompartment',
'com.oraclecloud.virtualnetwork.createnetworksecuritygroup',
'com.oraclecloud.virtualnetwork.deletenetworksecuritygroup',
'com.oraclecloud.virtualnetwork.updatenetworksecuritygroup'
],
"3.13": [
'com.oraclecloud.virtualnetwork.createdrg',
'com.oraclecloud.virtualnetwork.deletedrg',
'com.oraclecloud.virtualnetwork.updatedrg',
'com.oraclecloud.virtualnetwork.createdrgattachment',
'com.oraclecloud.virtualnetwork.deletedrgattachment',
'com.oraclecloud.virtualnetwork.updatedrgattachment',
'com.oraclecloud.virtualnetwork.changeinternetgatewaycompartment',
'com.oraclecloud.virtualnetwork.createinternetgateway',
'com.oraclecloud.virtualnetwork.deleteinternetgateway',
'com.oraclecloud.virtualnetwork.updateinternetgateway',
'com.oraclecloud.virtualnetwork.changelocalpeeringgatewaycompartment',
'com.oraclecloud.virtualnetwork.createlocalpeeringgateway',
'com.oraclecloud.virtualnetwork.deletelocalpeeringgateway.end',
'com.oraclecloud.virtualnetwork.updatelocalpeeringgateway',
'com.oraclecloud.natgateway.changenatgatewaycompartment',
'com.oraclecloud.natgateway.createnatgateway',
'com.oraclecloud.natgateway.deletenatgateway',
'com.oraclecloud.natgateway.updatenatgateway',
'com.oraclecloud.servicegateway.attachserviceid',
'com.oraclecloud.servicegateway.changeservicegatewaycompartment',
'com.oraclecloud.servicegateway.createservicegateway',
'com.oraclecloud.servicegateway.deleteservicegateway.end',
'com.oraclecloud.servicegateway.detachserviceid',
'com.oraclecloud.servicegateway.updateservicegateway'
]
}
# CIS IAM check
self.cis_iam_checks = {
"1.3": {"targets": ["target.group.name!=Administrators"]},
"1.13": {"resources": ["fnfunc", "instance", "autonomousdatabase", "resource.compartment.id"]},
"1.14": {
"all-resources": [
"request.permission!=BUCKET_DELETE", "request.permission!=OBJECT_DELETE", "request.permission!=EXPORT_SET_DELETE",
"request.permission!=MOUNT_TARGET_DELETE", "request.permission!=FILE_SYSTEM_DELETE", "request.permission!=VOLUME_BACKUP_DELETE",
"request.permission!=VOLUME_DELETE", "request.permission!=FILE_SYSTEM_DELETE_SNAPSHOT"
],
"file-family": [
"request.permission!=EXPORT_SET_DELETE", "request.permission!=MOUNT_TARGET_DELETE",
"request.permission!=FILE_SYSTEM_DELETE", "request.permission!=FILE_SYSTEM_DELETE_SNAPSHOT"
],
"file-systems": ["request.permission!=FILE_SYSTEM_DELETE", "request.permission!=FILE_SYSTEM_DELETE_SNAPSHOT"],
"mount-targets": ["request.permission!=MOUNT_TARGET_DELETE"],
"object-family": ["request.permission!=BUCKET_DELETE", "request.permission!=OBJECT_DELETE"],
"buckets": ["request.permission!=BUCKET_DELETE"],
"objects": ["request.permission!=OBJECT_DELETE"],
"volume-family": ["request.permission!=VOLUME_BACKUP_DELETE", "request.permission!=VOLUME_DELETE", "request.permission!=BOOT_VOLUME_BACKUP_DELETE"],
"volumes": ["request.permission!=VOLUME_DELETE"],
"volume-backups": ["request.permission!=VOLUME_BACKUP_DELETE"],
"boot-volume-backups": ["request.permission!=BOOT_VOLUME_BACKUP_DELETE"]},
"1.14-storage-admin": {
"all-resources": [
"request.permission=BUCKET_DELETE", "request.permission=OBJECT_DELETE", "request.permission=EXPORT_SET_DELETE",
"request.permission=MOUNT_TARGET_DELETE", "request.permission=FILE_SYSTEM_DELETE", "request.permission=VOLUME_BACKUP_DELETE",
"request.permission=VOLUME_DELETE", "request.permission=FILE_SYSTEM_DELETE_SNAPSHOT"
],
"file-family": [
"request.permission=EXPORT_SET_DELETE", "request.permission=MOUNT_TARGET_DELETE",
"request.permission=FILE_SYSTEM_DELETE", "request.permission=FILE_SYSTEM_DELETE_SNAPSHOT"
],
"file-systems": ["request.permission=FILE_SYSTEM_DELETE", "request.permission=FILE_SYSTEM_DELETE_SNAPSHOT"],
"mount-targets": ["request.permission=MOUNT_TARGET_DELETE"],
"object-family": ["request.permission=BUCKET_DELETE", "request.permission=OBJECT_DELETE"],
"buckets": ["request.permission=BUCKET_DELETE"],
"objects": ["request.permission=OBJECT_DELETE"],
"volume-family": ["request.permission=VOLUME_BACKUP_DELETE", "request.permission=VOLUME_DELETE", "request.permission=BOOT_VOLUME_BACKUP_DELETE"],
"volumes": ["request.permission=VOLUME_DELETE"],
"volume-backups": ["request.permission=VOLUME_BACKUP_DELETE"],
"boot-volume-backups": ["request.permission=BOOT_VOLUME_BACKUP_DELETE"]}}
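The `cis_iam_checks` entries above are where-clause conditions expected (or, for the `1.14-storage-admin` variant, required) in IAM policy statements. A minimal sketch of how such a condition list could be matched against a policy statement — the helper name and the sample statement are hypothetical illustrations, not part of this script:

```python
# Hypothetical helper: returns True when a policy statement scoped to a
# resource type carries every required where-clause condition.
def statement_meets_conditions(statement, resource, conditions):
    s = statement.lower().replace(" ", "")
    # Match the resource type ignoring hyphens ('file-family' vs 'file family')
    if resource.replace("-", "") not in s.replace("-", ""):
        return False
    # Every condition substring must appear in the normalized statement
    return all(cond.lower().replace(" ", "") in s for cond in conditions)

sample = ("Allow group StorageUsers to manage buckets in tenancy "
          "where request.permission != BUCKET_DELETE")
print(statement_meets_conditions(sample, "buckets",
                                 ["request.permission!=BUCKET_DELETE"]))  # True
```

The real check normalizes and parses statements rather than doing plain substring matching; this only illustrates the shape of the condition data.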
# Tenancy Data
self.__tenancy = None
self.__cloud_guard_config = None
self.__cloud_guard_config_status = None
self.__os_namespace = None
# For IAM Checks
self.__tenancy_password_policy = None
self.__compartments = []
self.__raw_compartment = []
self.__policies = []
self.__users = []
self.__groups_to_users = []
self.__tag_defaults = []
self.__dynamic_groups = []
self.__identity_domains = []
# For Networking checks
self.__network_security_groups = []
self.__network_security_lists = []
self.__network_subnets = []
self.__network_fastconnects = {} # Indexed by DRG ID
self.__network_drgs = {} # Indexed by DRG ID
self.__raw_network_drgs = []
self.__network_cpes = []
self.__network_ipsec_connections = {} # Indexed by DRG ID
self.__network_drg_attachments = {} # Indexed by DRG ID
# For Autonomous Database Checks
self.__autonomous_databases = []
# For Oracle Analytics Cloud Checks
self.__analytics_instances = []
# For Oracle Integration Cloud Checks
self.__integration_instances = []
# For Logging & Monitoring checks
self.__event_rules = []
self.__logging_list = []
self.__subnet_logs = {}
self.__write_bucket_logs = {}
self.__read_bucket_logs = {}
self.__load_balancer_access_logs = []
self.__load_balancer_error_logs = []
self.__api_gateway_access_logs = []
self.__api_gateway_error_logs = []
# Cloud Guard checks
self.__cloud_guard_targets = {}
# For Storage Checks
self.__buckets = []
self.__boot_volumes = []
self.__block_volumes = []
self.__file_storage_system = []
# For Vaults and Keys checks
self.__vaults = []
# For Region
self.__regions = {}
self.__raw_regions = []
self.__home_region = None
# For ONS Subscriptions
self.__subscriptions = []
# Results from Advanced search query
self.__resources_in_root_compartment = []
# For Budgets
self.__budgets = []
# For Service Connector
self.__service_connectors = {}
# Error Data
self.__errors = []
# Setting list of regions to run in
# Start print time info
show_version(verbose=True)
print("\nStarts at " + self.start_time_str)
self.__config = config
self.__signer = signer
# By default this is passed 'True' so all output is printed
self.__print_to_screen = print_to_screen.upper() == 'TRUE'
# Debugging is disabled by default
global DEBUG
DEBUG = debug
# creating list of regions to run
try:
if regions_to_run_in:
self.__regions_to_run_in = regions_to_run_in.split(",")
self.__run_in_all_regions = False
else:
# If no regions are passed, run in all regions
self.__regions_to_run_in = regions_to_run_in
self.__run_in_all_regions = True
print("\nRegions to run in: " + ("all regions" if self.__run_in_all_regions else str(self.__regions_to_run_in)))
except Exception:
raise RuntimeError("Invalid input: regions must be comma separated with no spaces: 'us-ashburn-1,us-phoenix-1'")
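The region handling above splits a comma-separated string with no spaces; an empty input means all subscribed regions. A standalone sketch of that logic — the function name `parse_regions` is illustrative, not from this script:

```python
def parse_regions(regions_to_run_in):
    """Mirror of the logic above: comma-separated region names, no spaces."""
    if regions_to_run_in:
        # e.g. 'us-ashburn-1,us-phoenix-1' -> ['us-ashburn-1', 'us-phoenix-1']
        return regions_to_run_in.split(","), False
    # Empty input means: run in every subscribed region
    return regions_to_run_in, True

print(parse_regions("us-ashburn-1,us-phoenix-1"))  # (['us-ashburn-1', 'us-phoenix-1'], False)
print(parse_regions(""))                           # ('', True)
```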
try:
self.__identity = oci.identity.IdentityClient(
self.__config, signer=self.__signer)
if proxy:
self.__identity.base_client.session.proxies = {'https': proxy}
# Getting Tenancy Data and Region data
self.__tenancy = self.__identity.get_tenancy(
config["tenancy"]).data
regions = self.__identity.list_region_subscriptions(
self.__tenancy.id).data
except Exception as e:
raise RuntimeError("Failed to get identity information." + str(e.args))
try:
# Find the budget home region to ensure the budget client is run against the home region
budget_home_region = next(
(obj.region_name for obj in regions if obj.is_home_region),None)
budget_config = self.__config.copy()
budget_config["region"] = budget_home_region
self.__budget_client = oci.budget.BudgetClient(
budget_config, signer=self.__signer)
if proxy:
self.__budget_client.base_client.session.proxies = {'https': proxy}
except Exception as e:
raise RuntimeError("Failed to create budgets client: " + str(e.args))
# Creating a record for home region and a list of all regions including the home region
for region in regions:
if region.is_home_region:
self.__home_region = region.region_name
print("Home region for tenancy is " + self.__home_region)
if self.__home_region != self.__config['region']:
print_header("It is recommended to run the CIS Compliance script in your home region")
print_header("The current region is: " + self.__config['region'])
self.__regions[region.region_name] = {
"is_home_region": region.is_home_region,
"region_key": region.region_key,
"region_name": region.region_name,
"status": region.status,
"identity_client": self.__identity,
"budget_client": self.__budget_client
}
elif region.region_name in self.__regions_to_run_in or self.__run_in_all_regions:
self.__regions[region.region_name] = {
"is_home_region": region.is_home_region,
"region_key": region.region_key,
"region_name": region.region_name,
"status": region.status,
}
record = {
"is_home_region": region.is_home_region,
"region_key": region.region_key,
"region_name": region.region_name,
"status": region.status,
}
self.__raw_regions.append(record)
# By default the report directory is the tenancy name plus today's date
if report_directory:
self.__report_directory = report_directory + "/"
else:
self.__report_directory = self.__tenancy.name + "-" + self.report_datetime
# Creating signers and config for all regions
self.__create_regional_signers(proxy)
# Setting os_namespace based on home region
try:
if not (self.__os_namespace):
self.__os_namespace = self.__regions[self.__home_region]['os_client'].get_namespace().data
except Exception as e:
raise RuntimeError(
"Failed to get tenancy namespace." + str(e.args))
# Determining if an object storage client is needed for output
self.__output_bucket = output_bucket
if self.__output_bucket:
self.__output_bucket_client = self.__regions[self.__home_region]['os_client']
# Determining if all raw data will be output
self.__output_raw_data = raw_data
# Determining if OCI Best Practices will be checked and output
self.__obp_checks = obp
# Determining if CSV report OCIDs will be redacted
self.__redact_output = redact_output
##########################################################################
# Create regional configs and signers and append them to the self.__regions object
##########################################################################
def __create_regional_signers(self, proxy):
print("Creating regional signers and configs...")
for region_key, region_values in self.__regions.items():
# Creating regional configs and signers
region_signer = self.__signer
region_signer.region_name = region_key
region_config = self.__config
region_config['region'] = region_key
try:
identity = oci.identity.IdentityClient(region_config, signer=region_signer)
if proxy:
identity.base_client.session.proxies = {'https': proxy}
region_values['identity_client'] = identity
audit = oci.audit.AuditClient(region_config, signer=region_signer)
if proxy:
audit.base_client.session.proxies = {'https': proxy}
region_values['audit_client'] = audit
cloud_guard = oci.cloud_guard.CloudGuardClient(region_config, signer=region_signer)
if proxy:
cloud_guard.base_client.session.proxies = {'https': proxy}
region_values['cloud_guard_client'] = cloud_guard
search = oci.resource_search.ResourceSearchClient(region_config, signer=region_signer)
if proxy:
search.base_client.session.proxies = {'https': proxy}
region_values['search_client'] = search
network = oci.core.VirtualNetworkClient(region_config, signer=region_signer)
if proxy:
network.base_client.session.proxies = {'https': proxy}
region_values['network_client'] = network
events = oci.events.EventsClient(region_config, signer=region_signer)
if proxy:
events.base_client.session.proxies = {'https': proxy}
region_values['events_client'] = events
logging = oci.logging.LoggingManagementClient(region_config, signer=region_signer)
if proxy:
logging.base_client.session.proxies = {'https': proxy}
region_values['logging_client'] = logging
os_client = oci.object_storage.ObjectStorageClient(region_config, signer=region_signer)
if proxy:
os_client.base_client.session.proxies = {'https': proxy}
region_values['os_client'] = os_client
vault = oci.key_management.KmsVaultClient(region_config, signer=region_signer)
if proxy:
vault.session.proxies = {'https': proxy}
region_values['vault_client'] = vault
ons_subs = oci.ons.NotificationDataPlaneClient(region_config, signer=region_signer)
if proxy:
ons_subs.session.proxies = {'https': proxy}
region_values['ons_subs_client'] = ons_subs
adb = oci.database.DatabaseClient(region_config, signer=region_signer)
if proxy:
adb.base_client.session.proxies = {'https': proxy}
region_values['adb_client'] = adb
oac = oci.analytics.AnalyticsClient(region_config, signer=region_signer)
if proxy:
oac.base_client.session.proxies = {'https': proxy}
region_values['oac_client'] = oac
oic = oci.integration.IntegrationInstanceClient(region_config, signer=region_signer)
if proxy:
oic.base_client.session.proxies = {'https': proxy}
region_values['oic_client'] = oic
bv = oci.core.BlockstorageClient(region_config, signer=region_signer)
if proxy:
bv.base_client.session.proxies = {'https': proxy}
region_values['bv_client'] = bv
fss = oci.file_storage.FileStorageClient(region_config, signer=region_signer)
if proxy:
fss.base_client.session.proxies = {'https': proxy}
region_values['fss_client'] = fss
sch = oci.sch.ServiceConnectorClient(region_config, signer=region_signer)
if proxy:
sch.base_client.session.proxies = {'https': proxy}
region_values['sch_client'] = sch
except Exception as e:
raise RuntimeError("Failed to create regional clients for data collection: " + str(e))
##########################################################################
# Check for Managed PaaS Compartment
##########################################################################
def __if_not_managed_paas_compartment(self, name):
return name != "ManagedCompartmentForPaaS"
##########################################################################
# Set ManagementCompartment ID
##########################################################################
def __set_managed_paas_compartment(self):
self.__managed_paas_compartment_id = ""
for compartment in self.__compartments:
if compartment.name == "ManagedCompartmentForPaaS":
self.__managed_paas_compartment_id = compartment.id
##########################################################################
# Load compartments
##########################################################################
def __identity_read_compartments(self):
print("\nProcessing Compartments...")
try:
self.__compartments = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_compartments,
compartment_id=self.__tenancy.id,
compartment_id_in_subtree=True,
lifecycle_state="ACTIVE"
).data
# Need to convert for raw output
for compartment in self.__compartments:
deep_link = self.__oci_compartment_uri + compartment.id
record = {
'id': compartment.id,
'name': compartment.name,
"deep_link": self.__generate_csv_hyperlink(deep_link, compartment.name),
'compartment_id': compartment.compartment_id,
'defined_tags': compartment.defined_tags,
"description": compartment.description,
"freeform_tags": compartment.freeform_tags,
"inactive_status": compartment.inactive_status,
"is_accessible": compartment.is_accessible,
"lifecycle_state": compartment.lifecycle_state,
"time_created": compartment.time_created.strftime(self.__iso_time_format),
"region": ""
}
self.__raw_compartment.append(record)
self.cis_foundations_benchmark_1_2['5.1']['Total'].append(compartment)
# Add root compartment which is not part of the list_compartments
self.__compartments.append(self.__tenancy)
deep_link = self.__oci_compartment_uri + self.__tenancy.id
root_compartment = {
"id": self.__tenancy.id,
"name": self.__tenancy.name,
"deep_link": self.__generate_csv_hyperlink(deep_link, self.__tenancy.name),
"compartment_id": "(root)",
"defined_tags": self.__tenancy.defined_tags,
"description": self.__tenancy.description,
"freeform_tags": self.__tenancy.freeform_tags,
"inactive_status": "",
"is_accessible": "",
"lifecycle_state": "",
"time_created": "",
"region": ""
}
self.__raw_compartment.append(root_compartment)
self.__set_managed_paas_compartment()
print("\tProcessed " + str(len(self.__compartments)) + " Compartments")
return self.__compartments
except Exception as e:
raise RuntimeError(
"Error in identity_read_compartments: " + str(e.args))
##########################################################################
# Load Identity Domains
##########################################################################
def __identity_read_domains(self):
print("Processing Identity Domains...")
raw_identity_domains = []
# Finding all Identity Domains in the tenancy
for compartment in self.__compartments:
try:
debug("__identity_read_domains: Getting Identity Domains for Compartment :" + str(compartment.name))
raw_identity_domains += oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_domains,
compartment_id = compartment.id,
lifecycle_state = "ACTIVE"
).data
# If this succeeds it is likely there are identity Domains
self.__identity_domains_enabled = True
except Exception as e:
debug("__identity_read_domains: Exception collecting Identity Domains \n" + str(e))
# If this fails the tenancy likely doesn't have identity domains or the permissions are off
break
# Check if the tenancy has Identity Domains; otherwise return early
if not(raw_identity_domains):
self.__identity_domains_enabled = False
return self.__identity_domains_enabled
for domain in raw_identity_domains:
debug("__identity_read_domains: Getting password policy for domain: " + domain.display_name)
domain_dict = oci.util.to_dict(domain)
try:
debug("__identity_read_domains: Getting Identity Domain Password Policy")
idcs_url = domain.url + "/admin/v1/PasswordPolicies/PasswordPolicy"
raw_pwd_policy_resp = requests.get(url=idcs_url, auth=self.__signer)
raw_pwd_policy_dict = json.loads(raw_pwd_policy_resp.content)
pwd_policy_dict = oci.util.to_dict(oci.identity_domains.IdentityDomainsClient(\
config=self.__config, service_endpoint=domain.url).get_password_policy(\
password_policy_id=raw_pwd_policy_dict['ocid']).data)
domain_dict['password_policy'] = pwd_policy_dict
domain_dict['errors'] = None
except Exception as e:
debug("Identity Domains Error is " + str(e))
domain_dict['password_policy'] = None
domain_dict['errors'] = str(e)
self.__identity_domains.append(domain_dict)
else:
self.__identity_domains_enabled = True
print("\tProcessed " + str(len(self.__identity_domains)) + " Identity Domains")
return self.__identity_domains_enabled
##########################################################################
# Load Groups and Group membership
##########################################################################
def __identity_read_groups_and_membership(self):
try:
# Getting all Groups in the Tenancy
groups_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_groups,
compartment_id=self.__tenancy.id
).data
# For each group in the tenancy, getting the group's membership
for grp in groups_data:
membership = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_user_group_memberships,
compartment_id=self.__tenancy.id,
group_id=grp.id).data
# For empty groups, add a single record with just the group info
grp_deep_link = self.__oci_groups_uri + grp.id
if not membership:
group_record = {
"id": grp.id,
"name": grp.name,
"deep_link": self.__generate_csv_hyperlink(grp_deep_link, grp.name),
"description": grp.description,
"lifecycle_state": grp.lifecycle_state,
"time_created": grp.time_created.strftime(self.__iso_time_format),
"user_id": "",
"user_id_link": ""
}
# Adding a record per empty group
self.__groups_to_users.append(group_record)
# For groups with members, add one record per user per group
for member in membership:
user_deep_link = self.__oci_users_uri + member.user_id
group_record = {
"id": grp.id,
"name": grp.name,
"deep_link": self.__generate_csv_hyperlink(grp_deep_link, grp.name),
"description": grp.description,
"lifecycle_state": grp.lifecycle_state,
"time_created": grp.time_created.strftime(self.__iso_time_format),
"user_id": member.user_id,
"user_id_link": self.__generate_csv_hyperlink(user_deep_link, member.user_id)
}
# Adding a record per user to group
self.__groups_to_users.append(group_record)
return self.__groups_to_users
except Exception as e:
raise RuntimeError(
"Error in __identity_read_groups_and_membership: " + str(e.args))
##########################################################################
# Load users
##########################################################################
def __identity_read_users(self):
try:
# Getting all users in the Tenancy
users_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_users,
compartment_id=self.__tenancy.id
).data
# Adding a record for each user
for user in users_data:
deep_link = self.__oci_users_uri + user.id
record = {
'id': user.id,
'name': user.name,
'deep_link': self.__generate_csv_hyperlink(deep_link, user.name),
'defined_tags': user.defined_tags,
'description': user.description,
'email': user.email,
'email_verified': user.email_verified,
'external_identifier': user.external_identifier,
'identity_provider_id': user.identity_provider_id,
'is_mfa_activated': user.is_mfa_activated,
'lifecycle_state': user.lifecycle_state,
'time_created': user.time_created.strftime(self.__iso_time_format),
'can_use_api_keys': user.capabilities.can_use_api_keys,
'can_use_auth_tokens': user.capabilities.can_use_auth_tokens,
'can_use_console_password': user.capabilities.can_use_console_password,
'can_use_customer_secret_keys': user.capabilities.can_use_customer_secret_keys,
'can_use_db_credentials': user.capabilities.can_use_db_credentials,
'can_use_o_auth2_client_credentials': user.capabilities.can_use_o_auth2_client_credentials,
'can_use_smtp_credentials': user.capabilities.can_use_smtp_credentials,
'groups': []
}
# Adding Groups to the user
for group in self.__groups_to_users:
if user.id == group['user_id']:
record['groups'].append(group['name'])
record['api_keys'] = self.__identity_read_user_api_key(user.id)
record['auth_tokens'] = self.__identity_read_user_auth_token(
user.id)
record['customer_secret_keys'] = self.__identity_read_user_customer_secret_key(
user.id)
self.__users.append(record)
print("\tProcessed " + str(len(self.__users)) + " Users")
return self.__users
except Exception as e:
debug("__identity_read_users: User ID is: " + str(user))
raise RuntimeError(
"Error in __identity_read_users: " + str(e.args))
##########################################################################
# Load user api keys
##########################################################################
def __identity_read_user_api_key(self, user_ocid):
api_keys = []
try:
user_api_keys_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_api_keys,
user_id=user_ocid
).data
for api_key in user_api_keys_data:
deep_link = self.__oci_users_uri + user_ocid + "/api-keys"
record = {
'id': api_key.key_id,
'fingerprint': api_key.fingerprint,
'deep_link': self.__generate_csv_hyperlink(deep_link, api_key.fingerprint),
'inactive_status': api_key.inactive_status,
'lifecycle_state': api_key.lifecycle_state,
'time_created': api_key.time_created.strftime(self.__iso_time_format),
}
api_keys.append(record)
return api_keys
except Exception as e:
self.__errors.append({"id": user_ocid, "error": "Failed to get API keys for User ID"})
debug("__identity_read_user_api_key: Failed to get API keys for User ID: " + user_ocid)
return api_keys
##########################################################################
# Load user auth tokens
##########################################################################
def __identity_read_user_auth_token(self, user_ocid):
auth_tokens = []
try:
auth_tokens_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_auth_tokens,
user_id=user_ocid
).data
for token in auth_tokens_data:
deep_link = self.__oci_users_uri + user_ocid + "/swift-credentials"
record = {
'id': token.id,
'description': token.description,
'deep_link': self.__generate_csv_hyperlink(deep_link, token.description),
'inactive_status': token.inactive_status,
'lifecycle_state': token.lifecycle_state,
# .strftime('%Y-%m-%d %H:%M:%S'),
'time_created': token.time_created.strftime(self.__iso_time_format),
'time_expires': str(token.time_expires),
'token': token.token
}
auth_tokens.append(record)
return auth_tokens
except Exception as e:
self.__errors.append({"id": user_ocid, "error": "Failed to get auth tokens for User ID"})
debug("__identity_read_user_auth_token: Failed to get auth tokens for User ID: " + user_ocid)
return auth_tokens
##########################################################################
# Load user customer secret key
##########################################################################
def __identity_read_user_customer_secret_key(self, user_ocid):
customer_secret_key = []
try:
customer_secret_key_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_customer_secret_keys,
user_id=user_ocid
).data
for key in customer_secret_key_data:
deep_link = self.__oci_users_uri + user_ocid + "/secret-keys"
record = {
'id': key.id,
'display_name': key.display_name,
'deep_link': self.__generate_csv_hyperlink(deep_link, key.display_name),
'inactive_status': key.inactive_status,
'lifecycle_state': key.lifecycle_state,
'time_created': key.time_created.strftime(self.__iso_time_format),
'time_expires': str(key.time_expires),
}
customer_secret_key.append(record)
return customer_secret_key
except Exception as e:
self.__errors.append({"id": user_ocid, "error": "Failed to get customer secret keys for User ID"})
debug("__identity_read_user_customer_secret_key: Failed to get customer secret keys for User ID: " + user_ocid)
return customer_secret_key
##########################################################################
# Tenancy IAM Policies
##########################################################################
def __identity_read_tenancy_policies(self):
try:
policies_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query Policy resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
for policy in policies_data:
deep_link = self.__oci_policies_uri + policy.identifier
record = {
"id": policy.identifier,
"name": policy.display_name,
'deep_link': self.__generate_csv_hyperlink(deep_link, policy.display_name),
"compartment_id": policy.compartment_id,
"description": policy.additional_details['description'],
"lifecycle_state": policy.lifecycle_state,
"statements": policy.additional_details['statements']
}
self.__policies.append(record)
print("\tProcessed " + str(len(self.__policies)) + " IAM Policies")
return self.__policies
except Exception as e:
raise RuntimeError("Error in __identity_read_tenancy_policies: " + str(e.args))
############################################
# Load Identity Dynamic Groups
############################################
def __identity_read_dynamic_groups(self):
try:
dynamic_groups_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_dynamic_groups,
compartment_id=self.__tenancy.id).data
for dynamic_group in dynamic_groups_data:
deep_link = self.__oci_dynamic_groups_uri + dynamic_group.id
# try:
record = {
"id": dynamic_group.id,
"name": dynamic_group.name,
"deep_link": self.__generate_csv_hyperlink(deep_link, dynamic_group.name),
"description": dynamic_group.description,
"matching_rule": dynamic_group.matching_rule,
"time_created": dynamic_group.time_created.strftime(self.__iso_time_format),
"inactive_status": dynamic_group.inactive_status,
"lifecycle_state": dynamic_group.lifecycle_state,
"defined_tags": dynamic_group.defined_tags,
"freeform_tags": dynamic_group.freeform_tags,
"compartment_id": dynamic_group.compartment_id,
"notes": ""
}
# except Exception as e:
# record = {
# "id": dynamic_group.id,
# "name": dynamic_group.name,
# "deep_link": self.__generate_csv_hyperlink(deep_link, dynamic_group.name),
# "description": "",
# "matching_rule": "",
# "time_created": "",
# "inactive_status": "",
# "lifecycle_state": "",
# "defined_tags": "",
# "freeform_tags": "",
# "compartment_id": "",
# "notes": str(e)
# }
self.__dynamic_groups.append(record)
print("\tProcessed " + str(len(self.__dynamic_groups)) + " Dynamic Groups")
return self.__dynamic_groups
except Exception as e:
raise RuntimeError("Error in __identity_read_dynamic_groups: " + str(e.args))
############################################
# Load Availability Domains
############################################
def __identity_read_availability_domains(self):
try:
for region_key, region_values in self.__regions.items():
region_values['availability_domains'] = oci.pagination.list_call_get_all_results(
region_values['identity_client'].list_availability_domains,
compartment_id=self.__tenancy.id
).data
print("\tProcessed " + str(len(region_values['availability_domains'])) + " Availability Domains in " + region_key)
except Exception as e:
raise RuntimeError(
"Error in __identity_read_availability_domains: " + str(e.args))
##########################################################################
# Get Objects Store Buckets
##########################################################################
def __os_read_buckets(self):
# Getting OS Namespace
try:
# looping through regions
for region_key, region_values in self.__regions.items():
buckets_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query Bucket resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
# Getting Bucket Info
for bucket in buckets_data:
try:
bucket_info = region_values['os_client'].get_bucket(
bucket.additional_details['namespace'], bucket.display_name).data
deep_link = self.__oci_buckets_uri + bucket_info.namespace + "/" + bucket_info.name + "/objects?region=" + region_key
record = {
"id": bucket_info.id,
"name": bucket_info.name,
"deep_link": self.__generate_csv_hyperlink(deep_link, bucket_info.name),
"kms_key_id": bucket_info.kms_key_id,
"namespace": bucket_info.namespace,
"compartment_id": bucket_info.compartment_id,
"object_events_enabled": bucket_info.object_events_enabled,
"public_access_type": bucket_info.public_access_type,
"replication_enabled": bucket_info.replication_enabled,
"is_read_only": bucket_info.is_read_only,
"storage_tier": bucket_info.storage_tier,
"time_created": bucket_info.time_created.strftime(self.__iso_time_format),
"versioning": bucket_info.versioning,
"defined_tags": bucket_info.defined_tags,
"freeform_tags": bucket_info.freeform_tags,
"region": region_key,
"notes": ""
}
self.__buckets.append(record)
except Exception as e:
record = {
"id": "",
"name": bucket.display_name,
"deep_link": "",
"kms_key_id": "",
"namespace": bucket.additional_details['namespace'],
"compartment_id": bucket.compartment_id,
"object_events_enabled": "",
"public_access_type": "",
"replication_enabled": "",
"is_read_only": "",
"storage_tier": "",
"time_created": bucket.time_created.strftime(self.__iso_time_format),
"versioning": "",
"defined_tags": bucket.defined_tags,
"freeform_tags": "",
"region": region_key,
"notes": str(e)
}
self.__buckets.append(record)
# Returning Buckets
print("\tProcessed " + str(len(self.__buckets)) + " Buckets")
return self.__buckets
except Exception as e:
raise RuntimeError("Error in __os_read_buckets " + str(e.args))
############################################
# Load Block Volumes
############################################
def __block_volume_read_block_volumes(self):
try:
for region_key, region_values in self.__regions.items():
volumes_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query Volume resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
# Getting Block Volume info
for volume in volumes_data:
deep_link = self.__oci_block_volumes_uri + volume.identifier + '?region=' + region_key
try:
record = {
"id": volume.identifier,
"display_name": volume.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, volume.display_name),
"kms_key_id": volume.additional_details['kmsKeyId'],
"lifecycle_state": volume.lifecycle_state,
"compartment_id": volume.compartment_id,
"size_in_gbs": volume.additional_details['sizeInGBs'],
"size_in_mbs": volume.additional_details['sizeInMBs'],
# "source_details": volume.source_details,
"time_created": volume.time_created.strftime(self.__iso_time_format),
# "volume_group_id": volume.volume_group_id,
# "vpus_per_gb": volume.vpus_per_gb,
# "auto_tuned_vpus_per_gb": volume.auto_tuned_vpus_per_gb,
"availability_domain": volume.availability_domain,
# "block_volume_replicas": volume.block_volume_replicas,
# "is_auto_tune_enabled": volume.is_auto_tune_enabled,
# "is_hydrated": volume.is_hydrated,
"defined_tags": volume.defined_tags,
"freeform_tags": volume.freeform_tags,
"system_tags": volume.system_tags,
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": volume.identifier,
"display_name": volume.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, volume.display_name),
"kms_key_id": "",
"lifecycle_state": "",
"compartment_id": "",
"size_in_gbs": "",
"size_in_mbs": "",
# "source_details": "",
"time_created": "",
# "volume_group_id": "",
# "vpus_per_gb": "",
# "auto_tuned_vpus_per_gb": "",
"availability_domain": "",
# "block_volume_replicas": "",
# "is_auto_tune_enabled": "",
# "is_hydrated": "",
"defined_tags": "",
"freeform_tags": "",
"system_tags": "",
"region": region_key,
"notes": str(e)
}
self.__block_volumes.append(record)
print("\tProcessed " + str(len(self.__block_volumes)) + " Block Volumes")
return self.__block_volumes
except Exception as e:
raise RuntimeError("Error in __block_volume_read_block_volumes " + str(e.args))
############################################
# Load Boot Volumes
############################################
def __boot_volume_read_boot_volumes(self):
try:
for region_key, region_values in self.__regions.items():
boot_volumes_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query BootVolume resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
for boot_volume in boot_volumes_data:
deep_link = self.__oci_boot_volumes_uri + boot_volume.identifier + '?region=' + region_key
try:
record = {
"id": boot_volume.identifier,
"display_name": boot_volume.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, boot_volume.display_name),
# "image_id": boot_volume.image_id,
"kms_key_id": boot_volume.additional_details['kmsKeyId'],
"lifecycle_state": boot_volume.lifecycle_state,
"size_in_gbs": boot_volume.additional_details['sizeInGBs'],
"size_in_mbs": boot_volume.additional_details['sizeInMBs'],
"availability_domain": boot_volume.availability_domain,
"time_created": boot_volume.time_created.strftime(self.__iso_time_format),
"compartment_id": boot_volume.compartment_id,
# "auto_tuned_vpus_per_gb": boot_volume.auto_tuned_vpus_per_gb,
# "boot_volume_replicas": boot_volume.boot_volume_replicas,
# "is_auto_tune_enabled": boot_volume.is_auto_tune_enabled,
# "is_hydrated": boot_volume.is_hydrated,
# "source_details": boot_volume.source_details,
# "vpus_per_gb": boot_volume.vpus_per_gb,
"system_tags": boot_volume.system_tags,
"defined_tags": boot_volume.defined_tags,
"freeform_tags": boot_volume.freeform_tags,
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": boot_volume.identifier,
"display_name": boot_volume.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, boot_volume.display_name),
# "image_id": "",
"kms_key_id": "",
"lifecycle_state": "",
"size_in_gbs": "",
"size_in_mbs": "",
"availability_domain": "",
"time_created": "",
"compartment_id": "",
# "auto_tuned_vpus_per_gb": "",
# "boot_volume_replicas": "",
# "is_auto_tune_enabled": "",
# "is_hydrated": "",
# "source_details": "",
# "vpus_per_gb": "",
"system_tags": "",
"defined_tags": "",
"freeform_tags": "",
"region": region_key,
"notes": str(e)
}
self.__boot_volumes.append(record)
print("\tProcessed " + str(len(self.__boot_volumes)) + " Boot Volumes")
return self.__boot_volumes
except Exception as e:
raise RuntimeError("Error in __boot_volume_read_boot_volumes " + str(e.args))
############################################
# Load FSS
############################################
def __fss_read_fsss(self):
try:
for region_key, region_values in self.__regions.items():
fss_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query FileSystem resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
for fss in fss_data:
deep_link = self.__oci_fss_uri + fss.identifier + '?region=' + region_key
try:
record = {
"id": fss.identifier,
"display_name": fss.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, fss.display_name),
"kms_key_id": fss.additional_details['kmsKeyId'],
"lifecycle_state": fss.lifecycle_state,
# "lifecycle_details": fss.lifecycle_details,
"availability_domain": fss.availability_domain,
"time_created": fss.time_created.strftime(self.__iso_time_format),
"compartment_id": fss.compartment_id,
# "is_clone_parent": fss.is_clone_parent,
# "is_hydrated": fss.is_hydrated,
# "metered_bytes": fss.metered_bytes,
"source_details": fss.additional_details['sourceDetails'],
"defined_tags": fss.defined_tags,
"freeform_tags": fss.freeform_tags,
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": fss.identifier,
"display_name": fss.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, fss.display_name),
"kms_key_id": "",
"lifecycle_state": "",
# "lifecycle_details": "",
"availability_domain": "",
"time_created": "",
"compartment_id": "",
# "is_clone_parent": "",
# "is_hydrated": "",
# "metered_bytes": "",
"source_details": "",
"defined_tags": "",
"freeform_tags": "",
"region": region_key,
"notes": str(e)
}
self.__file_storage_system.append(record)
print("\tProcessed " + str(len(self.__file_storage_system)) + " File Storage service")
return self.__file_storage_system
except Exception as e:
raise RuntimeError("Error in __fss_read_fsss " + str(e.args))
##########################################################################
# Network Security Groups
##########################################################################
def __network_read_network_security_groups_rules(self):
self.__network_security_groups = []
# Looping through compartments except the managed compartment
try:
for region_key, region_values in self.__regions.items():
nsgs_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query NetworkSecurityGroup resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
# Looping through NSGs to get their rules
for nsg in nsgs_data:
deep_link = self.__oci_networking_uri + nsg.additional_details['vcnId'] + "/network-security-groups/" + nsg.identifier + '?region=' + region_key
record = {
"id": nsg.identifier,
"compartment_id": nsg.compartment_id,
"display_name": nsg.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, nsg.display_name),
"lifecycle_state": nsg.lifecycle_state,
"time_created": nsg.time_created.strftime(self.__iso_time_format),
"vcn_id": nsg.additional_details['vcnId'],
"freeform_tags": nsg.freeform_tags,
"defined_tags": nsg.defined_tags,
"region": region_key,
"rules": []
}
nsg_rules = oci.pagination.list_call_get_all_results(
region_values['network_client'].list_network_security_group_security_rules,
network_security_group_id=nsg.identifier
).data
for rule in nsg_rules:
deep_link = self.__oci_networking_uri + nsg.additional_details['vcnId'] + "/network-security-groups/" + nsg.identifier + "/nsg-rules" + '?region=' + region_key
rule_record = {
"id": rule.id,
"deep_link": self.__generate_csv_hyperlink(deep_link, rule.id),
"destination": rule.destination,
"destination_type": rule.destination_type,
"direction": rule.direction,
"icmp_options": rule.icmp_options,
"is_stateless": rule.is_stateless,
"is_valid": rule.is_valid,
"protocol": rule.protocol,
"source": rule.source,
"source_type": rule.source_type,
"tcp_options": rule.tcp_options,
"time_created": rule.time_created.strftime(self.__iso_time_format),
"udp_options": rule.udp_options,
}
# Append NSG Rules to NSG
record['rules'].append(rule_record)
# Append NSG to list of NSGs
self.__network_security_groups.append(record)
print("\tProcessed " + str(len(self.__network_security_groups)) + " Network Security Groups")
return self.__network_security_groups
except Exception as e:
raise RuntimeError(
"Error in __network_read_network_security_groups_rules " + str(e.args))
##########################################################################
# Network Security Lists
##########################################################################
def __network_read_network_security_lists(self):
# Looping Through Compartments Except Managed
try:
for region_key, region_values in self.__regions.items():
security_lists_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query SecurityList resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
# Looping through Security Lists to get their rules
for security_list in security_lists_data:
deep_link = self.__oci_networking_uri + security_list.additional_details['vcnId'] + "/security-lists/" + security_list.identifier + '?region=' + region_key
record = {
"id": security_list.identifier,
"compartment_id": security_list.compartment_id,
"display_name": security_list.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, security_list.display_name),
"lifecycle_state": security_list.lifecycle_state,
"time_created": security_list.time_created.strftime(self.__iso_time_format),
"vcn_id": security_list.additional_details['vcnId'],
"region": region_key,
"freeform_tags": security_list.freeform_tags,
"defined_tags": security_list.defined_tags,
"ingress_security_rules": [],
"egress_security_rules": []
}
if security_list.additional_details['egressSecurityRules'] is not None:
for i in range(len(security_list.additional_details['egressSecurityRules'])):
erule = {
# "description": egress_rule.description,
"destination": security_list.additional_details['egressSecurityRules'][i]['destination'],
# "destination_type": egress_rule.destination_type,
"icmp_options": security_list.additional_details['egressSecurityRules'][i]['icmpOptions'],
"is_stateless": security_list.additional_details['egressSecurityRules'][i]['isStateless'],
"protocol": security_list.additional_details['egressSecurityRules'][i]['protocol'],
"tcp_options": security_list.additional_details['egressSecurityRules'][i]['tcpOptions'],
"udp_options": security_list.additional_details['egressSecurityRules'][i]['udpOptions']
}
record['egress_security_rules'].append(erule)
if security_list.additional_details['ingressSecurityRules'] is not None:
for i in range(len(security_list.additional_details['ingressSecurityRules'])):
irule = {
# "description": ingress_rule.description,
"source": security_list.additional_details['ingressSecurityRules'][i]['source'],
# "source_type": ingress_rule.source_type,
"icmp_options": security_list.additional_details['ingressSecurityRules'][i]['icmpOptions'],
"is_stateless": security_list.additional_details['ingressSecurityRules'][i]['isStateless'],
"protocol": security_list.additional_details['ingressSecurityRules'][i]['protocol'],
"tcp_options": security_list.additional_details['ingressSecurityRules'][i]['tcpOptions'],
"udp_options": security_list.additional_details['ingressSecurityRules'][i]['udpOptions']
}
record['ingress_security_rules'].append(irule)
# Append Security List to list of Security Lists
self.__network_security_lists.append(record)
print("\tProcessed " + str(len(self.__network_security_lists)) + " Security Lists")
return self.__network_security_lists
except Exception as e:
raise RuntimeError(
"Error in __network_read_network_security_lists " + str(e.args))
##########################################################################
# Network Subnets Lists
##########################################################################
def __network_read_network_subnets(self):
try:
for region_key, region_values in self.__regions.items():
subnets_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query Subnet resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
try:
for subnet in subnets_data:
deep_link = self.__oci_networking_uri + subnet.additional_details['vcnId'] + "/subnets/" + subnet.identifier + '?region=' + region_key
record = {
"id": subnet.identifier,
"availability_domain": subnet.availability_domain,
"cidr_block": subnet.additional_details['cidrBlock'],
"compartment_id": subnet.compartment_id,
"dhcp_options_id": subnet.additional_details['dhcpOptionsId'],
"display_name": subnet.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, subnet.display_name),
"dns_label": subnet.additional_details['dnsLabel'],
"ipv6_cidr_block": subnet.additional_details['ipv6CidrBlock'],
"ipv6_virtual_router_ip": subnet.additional_details['ipv6VirtualRouterIp'],
"lifecycle_state": subnet.lifecycle_state,
"prohibit_public_ip_on_vnic": subnet.additional_details['prohibitPublicIpOnVnic'],
"route_table_id": subnet.additional_details['routeTableId'],
"security_list_ids": subnet.additional_details['securityListIds'],
"subnet_domain_name": subnet.additional_details['subnetDomainName'],
"time_created": subnet.time_created.strftime(self.__iso_time_format),
"vcn_id": subnet.additional_details['vcnId'],
"virtual_router_ip": subnet.additional_details['virtualRouterIp'],
"virtual_router_mac": subnet.additional_details['virtualRouterMac'],
"freeform_tags": subnet.freeform_tags,
"define_tags": subnet.defined_tags,
"region": region_key,
"notes": ""
}
# Adding subnet to subnet list
self.__network_subnets.append(record)
except Exception as e:
deep_link = self.__oci_networking_uri + subnet.additional_details['vcnId'] + "/subnet/" + subnet.identifier + '?region=' + region_key
record = {
"id": subnet.identifier,
"availability_domain": subnet.availability_domain,
"cidr_block": subnet.additional_details['cidrBlock'],
"compartment_id": subnet.compartment_id,
"dhcp_options_id": subnet.additional_details['dhcpOptionsId'],
"display_name": subnet.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, subnet.display_name),
"dns_label": subnet.additional_details['dnsLabel'],
"ipv6_cidr_block": "",
"ipv6_virtual_router_ip": "",
"lifecycle_state": subnet.lifecycle_state,
"prohibit_public_ip_on_vnic": subnet.additional_details['prohibitPublicIpOnVnic'],
"route_table_id": subnet.additional_details['routeTableId'],
"security_list_ids": subnet.additional_details['securityListIds'],
"subnet_domain_name": subnet.additional_details['subnetDomainName'],
"time_created": subnet.time_created.strftime(self.__iso_time_format),
"vcn_id": subnet.additional_details['vcnId'],
"virtual_router_ip": subnet.additional_details['virtualRouterIp'],
"virtual_router_mac": subnet.additional_details['virtualRouterMac'],
"region": region_key,
"notes": str(e)
}
self.__network_subnets.append(record)
print("\tProcessed " + str(len(self.__network_subnets)) + " Network Subnets")
return self.__network_subnets
except Exception as e:
raise RuntimeError(
"Error in __network_read_network_subnets " + str(e.args))
##########################################################################
# Load DRG Attachments
##########################################################################
def __network_read_drg_attachments(self):
count_of_drg_attachments = 0
try:
for region_key, region_values in self.__regions.items():
# Looping through compartments in tenancy
drg_resources = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query DrgAttachment resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
compartments = set()
for drg in drg_resources:
compartments.add(drg.compartment_id)
for compartment in compartments:
drg_attachment_data = oci.pagination.list_call_get_all_results(
region_values['network_client'].list_drg_attachments,
compartment_id=compartment,
lifecycle_state="ATTACHED",
attachment_type="ALL"
).data
# Looping through DRG Attachments in a compartment
for drg_attachment in drg_attachment_data:
deep_link = self.__oci_drg_uri + drg_attachment.drg_id + "/drg-attachment/" + drg_attachment.id + '?region=' + region_key
try:
record = {
"id": drg_attachment.id,
"display_name": drg_attachment.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, drg_attachment.display_name),
"drg_id": drg_attachment.drg_id,
"vcn_id": drg_attachment.vcn_id,
"drg_route_table_id": str(drg_attachment.drg_route_table_id),
"export_drg_route_distribution_id": str(drg_attachment.export_drg_route_distribution_id),
"is_cross_tenancy": drg_attachment.is_cross_tenancy,
"lifecycle_state": drg_attachment.lifecycle_state,
"network_details": drg_attachment.network_details,
"network_id": drg_attachment.network_details.id,
"network_type": drg_attachment.network_details.type,
"freeform_tags": drg_attachment.freeform_tags,
"define_tags": drg_attachment.defined_tags,
"time_created": drg_attachment.time_created.strftime(self.__iso_time_format),
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": drg_attachment.id,
"display_name": drg_attachment.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, drg_attachment.display_name),
"drg_id": drg_attachment.drg_id,
"vcn_id": drg_attachment.vcn_id,
"drg_route_table_id": str(drg_attachment.drg_route_table_id),
"export_drg_route_distribution_id": str(drg_attachment.export_drg_route_distribution_id),
"is_cross_tenancy": drg_attachment.is_cross_tenancy,
"lifecycle_state": drg_attachment.lifecycle_state,
"network_details": drg_attachment.network_details,
"network_id": "",
"network_type": "",
"freeform_tags": drg_attachment.freeform_tags,
"define_tags": drg_attachment.defined_tags,
"time_created": drg_attachment.time_created.strftime(self.__iso_time_format),
"region": region_key,
"notes": str(e)
}
# Adding DRG Attachment to DRG Attachments list
                        self.__network_drg_attachments.setdefault(drg_attachment.drg_id, []).append(record)
# Counter
count_of_drg_attachments += 1
print("\tProcessed " + str(count_of_drg_attachments) + " DRG Attachments")
return self.__network_drg_attachments
except Exception as e:
raise RuntimeError(
"Error in __network_read_drg_attachments " + str(e.args))
##########################################################################
# Load DRGs
##########################################################################
def __network_read_drgs(self):
try:
for region_key, region_values in self.__regions.items():
# Looping through compartments in tenancy
drg_resources = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query Drg resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
compartments = set()
for drg in drg_resources:
compartments.add(drg.compartment_id)
for compartment in compartments:
drg_data = oci.pagination.list_call_get_all_results(
region_values['network_client'].list_drgs,
compartment_id=compartment,
).data
# Looping through DRGs in a compartment
for drg in drg_data:
deep_link = self.__oci_drg_uri + drg.id + '?region=' + region_key
# Fetch DRG Upgrade status
try:
upgrade_status = region_values['network_client'].get_upgrade_status(drg.id).data.status
except Exception:
upgrade_status = "Not Available"
try:
record = {
"id": drg.id,
"display_name": drg.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, drg.display_name),
"default_drg_route_tables": drg.default_drg_route_tables,
"default_ipsec_tunnel_route_table": drg.default_drg_route_tables.ipsec_tunnel,
"default_remote_peering_connection_route_table": drg.default_drg_route_tables.remote_peering_connection,
"default_vcn_table": drg.default_drg_route_tables.vcn,
"default_virtual_circuit_route_table": drg.default_drg_route_tables.virtual_circuit,
"default_export_drg_route_distribution_id": drg.default_export_drg_route_distribution_id,
"compartment_id": drg.compartment_id,
"lifecycle_state": drg.lifecycle_state,
"upgrade_status": upgrade_status,
"time_created": drg.time_created.strftime(self.__iso_time_format),
"freeform_tags": drg.freeform_tags,
"define_tags": drg.defined_tags,
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": drg.id,
"display_name": drg.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, drg.display_name),
"default_drg_route_tables": drg.default_drg_route_tables,
"default_ipsec_tunnel_route_table": "",
"default_remote_peering_connection_route_table": "",
"default_vcn_table": "",
"default_virtual_circuit_route_table": "",
"default_export_drg_route_distribution_id": drg.default_export_drg_route_distribution_id,
"compartment_id": drg.compartment_id,
"lifecycle_state": drg.lifecycle_state,
"upgrade_status": upgrade_status,
"time_created": drg.time_created.strftime(self.__iso_time_format),
"freeform_tags": drg.freeform_tags,
"define_tags": drg.defined_tags,
"region": region_key,
"notes": str(e)
}
# for Raw Data
self.__raw_network_drgs.append(record)
# For Checks data
self.__network_drgs[drg.id] = record
print("\tProcessed " + str(len(self.__network_drgs)) + " Dynamic Routing Gateways")
return self.__network_drgs
except Exception as e:
raise RuntimeError(
"Error in __network_read_drgs " + str(e.args))
##########################################################################
# Load Network FastConnect
##########################################################################
def __network_read_fastonnects(self):
try:
for region_key, region_values in self.__regions.items():
# Looping through compartments in tenancy
fastconnects = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query VirtualCircuit resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
compartments = set()
for vc in fastconnects:
compartments.add(vc.compartment_id)
for compartment in compartments:
fastconnect_data = oci.pagination.list_call_get_all_results(
region_values['network_client'].list_virtual_circuits,
compartment_id=compartment,
).data
# Looping through fastconnects in a compartment
for fastconnect in fastconnect_data:
deep_link = self.__oci_fastconnect_uri + fastconnect.id + '?region=' + region_key
try:
record = {
"id": fastconnect.id,
"display_name": fastconnect.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, fastconnect.display_name),
"bandwidth_shape_name": fastconnect.bandwidth_shape_name,
"bgp_admin_state": fastconnect.bgp_admin_state,
"bgp_ipv6_session_state": fastconnect.bgp_ipv6_session_state,
"bgp_management": fastconnect.bgp_management,
"bgp_session_state": fastconnect.bgp_session_state,
"compartment_id": fastconnect.compartment_id,
"cross_connect_mappings": fastconnect.cross_connect_mappings,
"customer_asn": fastconnect.customer_asn,
"customer_bgp_asn": fastconnect.customer_bgp_asn,
"gateway_id": fastconnect.gateway_id,
"ip_mtu": fastconnect.ip_mtu,
"is_bfd_enabled": fastconnect.is_bfd_enabled,
"lifecycle_state": fastconnect.lifecycle_state,
"oracle_bgp_asn": fastconnect.oracle_bgp_asn,
"provider_name": fastconnect.provider_name,
"provider_service_id": fastconnect.provider_service_id,
"provider_service_key_name": fastconnect.provider_service_id,
"provider_service_name": fastconnect.provider_service_name,
"provider_state": fastconnect.provider_state,
"public_prefixes": fastconnect.public_prefixes,
"reference_comment": fastconnect.reference_comment,
"fastconnect_region": fastconnect.region,
"routing_policy": fastconnect.routing_policy,
"service_type": fastconnect.service_type,
"time_created": fastconnect.time_created.strftime(self.__iso_time_format),
"type": fastconnect.type,
"freeform_tags": fastconnect.freeform_tags,
"define_tags": fastconnect.defined_tags,
"region": region_key,
"notes": ""
}
# Adding fastconnect to fastconnect dict
except Exception as e:
record = {
"id": fastconnect.id,
"display_name": fastconnect.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, fastconnect.display_name),
"bandwidth_shape_name": "",
"bgp_admin_state": "",
"bgp_ipv6_session_state": "",
"bgp_management": "",
"bgp_session_state": "",
"compartment_id": fastconnect.compartment_id,
"cross_connect_mappings": "",
"customer_asn": "",
"customer_bgp_asn": "",
"gateway_id": "",
"ip_mtu": "",
"is_bfd_enabled": "",
"lifecycle_state": "",
"oracle_bgp_asn": "",
"provider_name": "",
"provider_service_id": "",
"provider_service_key_name": "",
"provider_service_name": "",
"provider_state": "",
"public_prefixes": "",
"reference_comment": "",
"fastconnect_region": "",
"routing_policy": "",
"service_type": "",
"time_created": "",
"type": "",
"freeform_tags": "",
"define_tags": "",
"region": region_key,
"notes": str(e)
}
# Adding fastconnect to fastconnect dict
                        self.__network_fastconnects.setdefault(fastconnect.gateway_id, []).append(record)
print("\tProcessed " + str(len((list(itertools.chain.from_iterable(self.__network_fastconnects.values()))))) + " FastConnects")
return self.__network_fastconnects
except Exception as e:
raise RuntimeError(
"Error in __network_read_fastonnects " + str(e.args))
##########################################################################
# Load IP Sec Connections
##########################################################################
def __network_read_ip_sec_connections(self):
try:
for region_key, region_values in self.__regions.items():
ip_sec_connections_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query IPSecConnection resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
for ip_sec in ip_sec_connections_data:
try:
deep_link = self.__oci_ipsec_uri + ip_sec.identifier + '?region=' + region_key
record = {
"id": ip_sec.identifier,
"display_name": ip_sec.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, ip_sec.display_name),
"cpe_id": ip_sec.additional_details['cpeId'],
"drg_id": ip_sec.additional_details['drgId'],
"compartment_id": ip_sec.compartment_id,
# "cpe_local_identifier": ip_sec.cpe_local_identifier,
# "cpe_local_identifier_type": ip_sec.cpe_local_identifier_type,
"lifecycle_state": ip_sec.lifecycle_state,
"freeform_tags": ip_sec.freeform_tags,
"define_tags": ip_sec.defined_tags,
"region": region_key,
"tunnels": [],
"number_tunnels_up": 0,
"tunnels_up": True, # It is true unless I find out otherwise
"notes": ""
}
# Getting Tunnel Data
try:
ip_sec_tunnels_data = oci.pagination.list_call_get_all_results(
region_values['network_client'].list_ip_sec_connection_tunnels,
ipsc_id=ip_sec.identifier,
).data
for tunnel in ip_sec_tunnels_data:
deep_link = self.__oci_ipsec_uri + ip_sec.identifier + "/tunnels/" + tunnel.id + '?region=' + region_key
tunnel_record = {
"id": tunnel.id,
"cpe_ip": tunnel.cpe_ip,
"display_name": tunnel.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, tunnel.display_name),
"vpn_ip": tunnel.vpn_ip,
"ike_version": tunnel.ike_version,
"encryption_domain_config": tunnel.encryption_domain_config,
"lifecycle_state": tunnel.lifecycle_state,
"nat_translation_enabled": tunnel.nat_translation_enabled,
"bgp_session_info": tunnel.bgp_session_info,
"oracle_can_initiate": tunnel.oracle_can_initiate,
"routing": tunnel.routing,
"status": tunnel.status,
"compartment_id": tunnel.compartment_id,
"dpd_mode": tunnel.dpd_mode,
"dpd_timeout_in_sec": tunnel.dpd_timeout_in_sec,
"time_created": tunnel.time_created.strftime(self.__iso_time_format),
"time_status_updated": str(tunnel.time_status_updated),
"notes": ""
}
if tunnel_record['status'].upper() == "UP":
record['number_tunnels_up'] += 1
else:
record['tunnels_up'] = False
record["tunnels"].append(tunnel_record)
except Exception:
print("\t Unable to tunnels for ip_sec_connection: " + ip_sec.display_name + " id: " + ip_sec.identifier)
record['tunnels_up'] = False
except Exception:
record = {
"id": ip_sec.identifier,
"display_name": ip_sec.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, ip_sec.display_name),
"cpe_id": "",
"drg_id": "",
"compartment_id": ip_sec.compartment_id,
"cpe_local_identifier": "",
"cpe_local_identifier_type": "",
"lifecycle_state": "",
"freeform_tags": "",
"define_tags": "",
"region": region_key,
"tunnels": [],
"number_tunnels_up": 0,
"tunnels_up": False,
"notes": ""
}
                    self.__network_ipsec_connections.setdefault(ip_sec.additional_details['drgId'], []).append(record)
print("\tProcessed " + str(len((list(itertools.chain.from_iterable(self.__network_ipsec_connections.values()))))) + " IP SEC Conenctions")
return self.__network_ipsec_connections
except Exception as e:
raise RuntimeError(
"Error in __network_read_ip_sec_connections " + str(e.args))
############################################
# Load Autonomous Databases
############################################
def __adb_read_adbs(self):
try:
for region_key, region_values in self.__regions.items():
adb_query_resources = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query AutonomousDatabase resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
compartments = set()
for adb in adb_query_resources:
compartments.add(adb.compartment_id)
for compartment in compartments:
autonomous_databases = oci.pagination.list_call_get_all_results(
region_values['adb_client'].list_autonomous_databases,
compartment_id=compartment
).data
for adb in autonomous_databases:
try:
deep_link = self.__oci_adb_uri + adb.id + '?region=' + region_key
# Issue 295 fixed
if adb.lifecycle_state not in [ oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATED, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATING, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_UNAVAILABLE ]:
record = {
"id": adb.id,
"display_name": adb.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, adb.display_name),
"apex_details": adb.apex_details,
"are_primary_whitelisted_ips_used": adb.are_primary_whitelisted_ips_used,
"autonomous_container_database_id": adb.autonomous_container_database_id,
"autonomous_maintenance_schedule_type": adb.autonomous_maintenance_schedule_type,
"available_upgrade_versions": adb.available_upgrade_versions,
"backup_config": adb.backup_config,
"compartment_id": adb.compartment_id,
"connection_strings": adb.connection_strings,
"connection_urls": adb.connection_urls,
"cpu_core_count": adb.cpu_core_count,
"customer_contacts": adb.cpu_core_count,
"data_safe_status": adb.data_safe_status,
"data_storage_size_in_gbs": adb.data_storage_size_in_gbs,
"data_storage_size_in_tbs": adb.data_storage_size_in_tbs,
"database_management_status": adb.database_management_status,
"dataguard_region_type": adb.dataguard_region_type,
"db_name": adb.db_name,
"db_version": adb.db_version,
"db_workload": adb.db_workload,
"defined_tags": adb.defined_tags,
"failed_data_recovery_in_seconds": adb.failed_data_recovery_in_seconds,
"freeform_tags": adb.freeform_tags,
"infrastructure_type": adb.infrastructure_type,
"is_access_control_enabled": adb.is_access_control_enabled,
"is_auto_scaling_enabled": adb.is_auto_scaling_enabled,
"is_data_guard_enabled": adb.is_data_guard_enabled,
"is_dedicated": adb.is_dedicated,
"is_free_tier": adb.is_free_tier,
"is_mtls_connection_required": adb.is_mtls_connection_required,
"is_preview": adb.is_preview,
"is_reconnect_clone_enabled": adb.is_reconnect_clone_enabled,
"is_refreshable_clone": adb.is_refreshable_clone,
"key_history_entry": adb.key_history_entry,
"key_store_id": adb.key_store_id,
"key_store_wallet_name": adb.key_store_wallet_name,
"kms_key_id": adb.kms_key_id,
"kms_key_lifecycle_details": adb.kms_key_lifecycle_details,
"kms_key_version_id": adb.kms_key_version_id,
"license_model": adb.license_model,
"lifecycle_details": adb.lifecycle_details,
"lifecycle_state": adb.lifecycle_state,
"nsg_ids": adb.nsg_ids,
"ocpu_count": adb.ocpu_count,
"open_mode": adb.open_mode,
"operations_insights_status": adb.operations_insights_status,
"peer_db_ids": adb.peer_db_ids,
"permission_level": adb.permission_level,
"private_endpoint": adb.private_endpoint,
"private_endpoint_ip": adb.private_endpoint_ip,
"private_endpoint_label": adb.private_endpoint_label,
"refreshable_mode": adb.refreshable_mode,
"refreshable_status": adb.refreshable_status,
"role": adb.role,
"scheduled_operations": adb.scheduled_operations,
"service_console_url": adb.service_console_url,
"source_id": adb.source_id,
"standby_whitelisted_ips": adb.standby_whitelisted_ips,
"subnet_id": adb.subnet_id,
"supported_regions_to_clone_to": adb.supported_regions_to_clone_to,
"system_tags": adb.system_tags,
"time_created": adb.time_created.strftime(self.__iso_time_format),
"time_data_guard_role_changed": str(adb.time_data_guard_role_changed),
"time_deletion_of_free_autonomous_database": str(adb.time_deletion_of_free_autonomous_database),
"time_local_data_guard_enabled": str(adb.time_local_data_guard_enabled),
"time_maintenance_begin": str(adb.time_maintenance_begin),
"time_maintenance_end": str(adb.time_maintenance_end),
"time_of_last_failover": str(adb.time_of_last_failover),
"time_of_last_refresh": str(adb.time_of_last_refresh),
"time_of_last_refresh_point": str(adb.time_of_last_refresh_point),
"time_of_last_switchover": str(adb.time_of_last_switchover),
"time_of_next_refresh": str(adb.time_of_next_refresh),
"time_reclamation_of_free_autonomous_database": str(adb.time_reclamation_of_free_autonomous_database),
"time_until_reconnect_clone_enabled": str(adb.time_until_reconnect_clone_enabled),
"used_data_storage_size_in_tbs": str(adb.used_data_storage_size_in_tbs),
"vault_id": adb.vault_id,
"whitelisted_ips": adb.whitelisted_ips,
"region": region_key,
"notes": ""
}
else:
record = {
"id": adb.id,
"display_name": adb.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, adb.display_name),
"apex_details": "",
"are_primary_whitelisted_ips_used": "",
"autonomous_container_database_id": "",
"autonomous_maintenance_schedule_type": "",
"available_upgrade_versions": "",
"backup_config": "",
"compartment_id": adb.compartment_id,
"connection_strings": "",
"connection_urls": "",
"cpu_core_count": "",
"customer_contacts": "",
"data_safe_status": "",
"data_storage_size_in_gbs": "",
"data_storage_size_in_tbs": "",
"database_management_status": "",
"dataguard_region_type": "",
"db_name": "",
"db_version": "",
"db_workload": "",
"defined_tags": "",
"failed_data_recovery_in_seconds": "",
"freeform_tags": "",
"infrastructure_type": "",
"is_access_control_enabled": "",
"is_auto_scaling_enabled": "",
"is_data_guard_enabled": "",
"is_dedicated": "",
"is_free_tier": "",
"is_mtls_connection_required": "",
"is_preview": "",
"is_reconnect_clone_enabled": "",
"is_refreshable_clone": "",
"key_history_entry": "",
"key_store_id": "",
"key_store_wallet_name": "",
"kms_key_id": "",
"kms_key_lifecycle_details": "",
"kms_key_version_id": "",
"license_model": "",
"lifecycle_details": "",
"lifecycle_state": adb.lifecycle_state,
"nsg_ids": "",
"ocpu_count": "",
"open_mode": "",
"operations_insights_status": "",
"peer_db_ids": "",
"permission_level": "",
"private_endpoint": "",
"private_endpoint_ip": "",
"private_endpoint_label": "",
"refreshable_mode": "",
"refreshable_status": "",
"role": "",
"scheduled_operations": "",
"service_console_url": "",
"source_id": "",
"standby_whitelisted_ips": "",
"subnet_id": "",
"supported_regions_to_clone_to": "",
"system_tags": "",
"time_created": "",
"time_data_guard_role_changed": "",
"time_deletion_of_free_autonomous_database": "",
"time_local_data_guard_enabled": "",
"time_maintenance_begin": "",
"time_maintenance_end": "",
"time_of_last_failover": "",
"time_of_last_refresh": "",
"time_of_last_refresh_point": "",
"time_of_last_switchover": "",
"time_of_next_refresh": "",
"time_reclamation_of_free_autonomous_database": "",
"time_until_reconnect_clone_enabled": "",
"used_data_storage_size_in_tbs": "",
"vault_id": "",
"whitelisted_ips": "",
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": "",
"display_name": "",
"deep_link": "",
"apex_details": "",
"are_primary_whitelisted_ips_used": "",
"autonomous_container_database_id": "",
"autonomous_maintenance_schedule_type": "",
"available_upgrade_versions": "",
"backup_config": "",
"compartment_id": "",
"connection_strings": "",
"connection_urls": "",
"cpu_core_count": "",
"customer_contacts": "",
"data_safe_status": "",
"data_storage_size_in_gbs": "",
"data_storage_size_in_tbs": "",
"database_management_status": "",
"dataguard_region_type": "",
"db_name": "",
"db_version": "",
"db_workload": "",
"defined_tags": "",
"failed_data_recovery_in_seconds": "",
"freeform_tags": "",
"infrastructure_type": "",
"is_access_control_enabled": "",
"is_auto_scaling_enabled": "",
"is_data_guard_enabled": "",
"is_dedicated": "",
"is_free_tier": "",
"is_mtls_connection_required": "",
"is_preview": "",
"is_reconnect_clone_enabled": "",
"is_refreshable_clone": "",
"key_history_entry": "",
"key_store_id": "",
"key_store_wallet_name": "",
"kms_key_id": "",
"kms_key_lifecycle_details": "",
"kms_key_version_id": "",
"license_model": "",
"lifecycle_details": "",
"lifecycle_state": "",
"nsg_ids": "",
"ocpu_count": "",
"open_mode": "",
"operations_insights_status": "",
"peer_db_ids": "",
"permission_level": "",
"private_endpoint": "",
"private_endpoint_ip": "",
"private_endpoint_label": "",
"refreshable_mode": "",
"refreshable_status": "",
"role": "",
"scheduled_operations": "",
"service_console_url": "",
"source_id": "",
"standby_whitelisted_ips": "",
"subnet_id": "",
"supported_regions_to_clone_to": "",
"system_tags": "",
"time_created": "",
"time_data_guard_role_changed": "",
"time_deletion_of_free_autonomous_database": "",
"time_local_data_guard_enabled": "",
"time_maintenance_begin": "",
"time_maintenance_end": "",
"time_of_last_failover": "",
"time_of_last_refresh": "",
"time_of_last_refresh_point": "",
"time_of_last_switchover": "",
"time_of_next_refresh": "",
"time_reclamation_of_free_autonomous_database": "",
"time_until_reconnect_clone_enabled": "",
"used_data_storage_size_in_tbs": "",
"vault_id": "",
"whitelisted_ips": "",
"region": region_key,
"notes": str(e)
}
self.__autonomous_databases.append(record)
print("\tProcessed " + str(len(self.__autonomous_databases)) + " Autonomous Databases")
return self.__autonomous_databases
except Exception as e:
raise RuntimeError("Error in __adb_read_adbs " + str(e.args))
############################################
# Load Oracle Integration Cloud
############################################
def __oic_read_oics(self):
try:
for region_key, region_values in self.__regions.items():
oic_resources = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query IntegrationInstance resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
compartments = set()
for oic_resource in oic_resources:
compartments.add(oic_resource.compartment_id)
for compartment in compartments:
oic_instances = oci.pagination.list_call_get_all_results(
region_values['oic_client'].list_integration_instances,
compartment_id=compartment
).data
for oic_instance in oic_instances:
                        if oic_instance.lifecycle_state in ('ACTIVE', 'INACTIVE'):
deep_link = self.__oci_oicinstance_uri + oic_instance.id + '?region=' + region_key
try:
record = {
"id": oic_instance.id,
"display_name": oic_instance.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, oic_instance.display_name),
"network_endpoint_details": oic_instance.network_endpoint_details,
"compartment_id": oic_instance.compartment_id,
"alternate_custom_endpoints": oic_instance.alternate_custom_endpoints,
"consumption_model": oic_instance.consumption_model,
"custom_endpoint": oic_instance.custom_endpoint,
"instance_url": oic_instance.instance_url,
"integration_instance_type": oic_instance.integration_instance_type,
"is_byol": oic_instance.is_byol,
"is_file_server_enabled": oic_instance.is_file_server_enabled,
"is_visual_builder_enabled": oic_instance.is_visual_builder_enabled,
"lifecycle_state": oic_instance.lifecycle_state,
"message_packs": oic_instance.message_packs,
"state_message": oic_instance.state_message,
"time_created": oic_instance.time_created.strftime(self.__iso_time_format),
"time_updated": str(oic_instance.time_updated),
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": oic_instance.id,
"display_name": oic_instance.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, oic_instance.display_name),
"network_endpoint_details": "",
"compartment_id": "",
"alternate_custom_endpoints": "",
"consumption_model": "",
"custom_endpoint": "",
"instance_url": "",
"integration_instance_type": "",
"is_byol": "",
"is_file_server_enabled": "",
"is_visual_builder_enabled": "",
"lifecycle_state": "",
"message_packs": "",
"state_message": "",
"time_created": "",
"time_updated": "",
"region": region_key,
"notes": str(e)
}
self.__integration_instances.append(record)
print("\tProcessed " + str(len(self.__integration_instances)) + " Integration Instance")
return self.__integration_instances
except Exception as e:
raise RuntimeError("Error in __oic_read_oics " + str(e.args))
############################################
# Load Oracle Analytics Cloud
############################################
def __oac_read_oacs(self):
try:
for region_key, region_values in self.__regions.items():
oac_resources = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query AnalyticsInstance resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
compartments = set()
for resource in oac_resources:
compartments.add(resource.compartment_id)
for compartment in compartments:
oac_instances = oci.pagination.list_call_get_all_results(
region_values['oac_client'].list_analytics_instances,
compartment_id=compartment
).data
for oac_instance in oac_instances:
deep_link = self.__oci_oacinstance_uri + oac_instance.id + '?region=' + region_key
try:
record = {
"id": oac_instance.id,
"name": oac_instance.name,
"deep_link": self.__generate_csv_hyperlink(deep_link, oac_instance.name),
"description": oac_instance.description,
"network_endpoint_details": oac_instance.network_endpoint_details,
"network_endpoint_type": oac_instance.network_endpoint_details.network_endpoint_type,
"compartment_id": oac_instance.compartment_id,
"lifecycle_state": oac_instance.lifecycle_state,
"email_notification": oac_instance.email_notification,
"feature_set": oac_instance.feature_set,
"service_url": oac_instance.service_url,
"capacity": oac_instance.capacity,
"license_type": oac_instance.license_type,
"time_created": oac_instance.time_created.strftime(self.__iso_time_format),
"time_updated": str(oac_instance.time_updated),
"region": region_key,
"notes": ""
}
except Exception as e:
record = {
"id": oac_instance.id,
"name": oac_instance.name,
"deep_link": self.__generate_csv_hyperlink(deep_link, oac_instance.name),
"network_endpoint_details": "",
"compartment_id": "",
"lifecycle_state": "",
"email_notification": "",
"feature_set": "",
"service_url": "",
"capacity": "",
"license_type": "",
"time_created": "",
"time_updated": "",
"region": region_key,
"notes": str(e)
}
self.__analytics_instances.append(record)
print("\tProcessed " + str(len(self.__analytics_instances)) + " Analytics Instances")
return self.__analytics_instances
except Exception as e:
raise RuntimeError("Error in __oac_read_oacs " + str(e.args))
##########################################################################
# Events
##########################################################################
def __events_read_event_rules(self):
try:
for region_key, region_values in self.__regions.items():
events_rules_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query EventRule resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
for event_rule in events_rules_data:
deep_link = self.__oci_events_uri + event_rule.identifier + '?region=' + region_key
record = {
"compartment_id": event_rule.compartment_id,
"condition": event_rule.additional_details['condition'],
"description": event_rule.additional_details['description'],
"display_name": event_rule.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, event_rule.display_name),
"id": event_rule.identifier,
# "is_enabled": event_rule.is_enabled,
"lifecycle_state": event_rule.lifecycle_state,
"time_created": event_rule.time_created.strftime(self.__iso_time_format),
"region": region_key
}
self.__event_rules.append(record)
print("\tProcessed " + str(len(self.__event_rules)) + " Event Rules")
return self.__event_rules
except Exception as e:
raise RuntimeError("Error in events_read_rules " + str(e.args))
##########################################################################
# Logging - Log Groups and Logs
##########################################################################
def __logging_read_log_groups_and_logs(self):
try:
for region_key, region_values in self.__regions.items():
log_groups = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query LogGroup resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
# Looping through log groups to get logs
for log_group in log_groups:
deep_link = self.__oci_loggroup_uri + log_group.identifier + '?region=' + region_key
record = {
"compartment_id": log_group.compartment_id,
"description": log_group.additional_details['description'],
"display_name": log_group.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, log_group.display_name),
"id": log_group.identifier,
"time_created": log_group.time_created.strftime(self.__iso_time_format),
# "time_last_modified": str(log_group.time_last_modified),
"defined_tags": log_group.defined_tags,
"freeform_tags": log_group.freeform_tags,
"region": region_key,
"logs": [],
"notes" : ""
}
try:
logs = oci.pagination.list_call_get_all_results(
region_values['logging_client'].list_logs,
log_group_id=log_group.identifier
).data
for log in logs:
deep_link = self.__oci_loggroup_uri + log_group.identifier + "/logs/" + log.id + '?region=' + region_key
log_record = {
"compartment_id": log.compartment_id,
"display_name": log.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, log.display_name),
"id": log.id,
"is_enabled": log.is_enabled,
"lifecycle_state": log.lifecycle_state,
"log_group_id": log.log_group_id,
"log_type": log.log_type,
"retention_duration": log.retention_duration,
"time_created": log.time_created.strftime(self.__iso_time_format),
"time_last_modified": str(log.time_last_modified),
"defined_tags": log.defined_tags,
"freeform_tags": log.freeform_tags
}
try:
if log.configuration:
log_record["configuration_compartment_id"] = log.configuration.compartment_id,
log_record["source_category"] = log.configuration.source.category,
log_record["source_parameters"] = log.configuration.source.parameters,
log_record["source_resource"] = log.configuration.source.resource,
log_record["source_service"] = log.configuration.source.service,
log_record["source_source_type"] = log.configuration.source.source_type
log_record["archiving_enabled"] = log.configuration.archiving.is_enabled
if log.configuration.source.service == 'flowlogs':
self.__subnet_logs[log.configuration.source.resource] = {"log_group_id": log.log_group_id, "log_id": log.id}
elif log.configuration.source.service == 'objectstorage' and 'write' in log.configuration.source.category:
# Only write logs
self.__write_bucket_logs[log.configuration.source.resource] = {"log_group_id": log.log_group_id, "log_id": log.id, "region": region_key}
elif log.configuration.source.service == 'objectstorage' and 'read' in log.configuration.source.category:
# Only read logs
self.__read_bucket_logs[log.configuration.source.resource] = {"log_group_id": log.log_group_id, "log_id": log.id, "region": region_key}
elif log.configuration.source.service == 'loadbalancer' and 'error' in log.configuration.source.category:
self.__load_balancer_error_logs.append(
log.configuration.source.resource)
elif log.configuration.source.service == 'loadbalancer' and 'access' in log.configuration.source.category:
self.__load_balancer_access_logs.append(
log.configuration.source.resource)
elif log.configuration.source.service == 'apigateway' and 'access' in log.configuration.source.category:
self.__api_gateway_access_logs.append(
log.configuration.source.resource)
elif log.configuration.source.service == 'apigateway' and 'error' in log.configuration.source.category:
self.__api_gateway_error_logs.append(
log.configuration.source.resource)
except Exception as e:
self.__errors.append({"id" : log.id, "error" : str(e)})
# Append Log to log List
record['logs'].append(log_record)
except Exception as e:
self.__errors.append({"id" : log_group.identifier, "error" : str(e) })
record['notes'] = str(e)
self.__logging_list.append(record)
print("\tProcessed " + str(len(self.__logging_list)) + " Log Group Logs")
return self.__logging_list
except Exception as e:
raise RuntimeError(
"Error in __logging_read_log_groups_and_logs " + str(e.args))
##########################################################################
# Vault Keys
##########################################################################
def __vault_read_vaults(self):
self.__vaults = []
try:
for region_key, region_values in self.__regions.items():
keys_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query Key resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
vaults_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query Vault resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
# Get all Vaults in a compartment
for vlt in vaults_data:
deep_link = self.__oci_vault_uri + vlt.identifier + '?region=' + region_key
vault_record = {
"compartment_id": vlt.compartment_id,
# "crypto_endpoint": vlt.crypto_endpoint,
"display_name": vlt.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, vlt.display_name),
"id": vlt.identifier,
"lifecycle_state": vlt.lifecycle_state,
# "management_endpoint": vlt.management_endpoint,
"time_created": vlt.time_created.strftime(self.__iso_time_format),
"vault_type": vlt.additional_details['vaultType'],
"freeform_tags": vlt.freeform_tags,
"defined_tags": vlt.defined_tags,
"region": region_key,
"keys": []
}
for key in keys_data:
if vlt.identifier == key.additional_details['vaultId']:
deep_link = self.__oci_vault_uri + vlt.identifier + "/vaults/" + key.identifier + '?region=' + region_key
key_record = {
"id": key.identifier,
"display_name": key.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, key.display_name),
"compartment_id": key.compartment_id,
"lifecycle_state": key.lifecycle_state,
"time_created": key.time_created.strftime(self.__iso_time_format),
}
vault_record['keys'].append(key_record)
self.__vaults.append(vault_record)
print("\tProcessed " + str(len(self.__vaults)) + " Vaults")
return self.__vaults
except Exception as e:
raise RuntimeError(
"Error in __vault_read_vaults " + str(e.args))
##########################################################################
# OCI Budgets
##########################################################################
def __budget_read_budgets(self):
try:
# Getting all budgets in tenancy of any type
budgets_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['budget_client'].list_budgets,
compartment_id=self.__tenancy.id,
target_type="ALL"
).data
# Looping through Budgets to get records
for budget in budgets_data:
try:
alerts_data = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['budget_client'].list_alert_rules,
budget_id=budget.id,
).data
except Exception:
print("\tFailed to get Alert Rules for Budget Name: " + budget.display_name + " id: " + budget.id)
alerts_data = []
deep_link = self.__oci_budget_uri + budget.id
record = {
"actual_spend": budget.actual_spend,
"alert_rule_count": budget.alert_rule_count,
"amount": budget.amount,
"budget_processing_period_start_offset": budget.budget_processing_period_start_offset,
"compartment_id": budget.compartment_id,
"description": budget.description,
"display_name": budget.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, budget.display_name),
"id": budget.id,
"lifecycle_state": budget.lifecycle_state,
"processing_period_type": budget.processing_period_type,
"reset_period": budget.reset_period,
"target_compartment_id": budget.target_compartment_id,
"target_type": budget.target_type,
"targets": budget.targets,
"time_created": budget.time_created.strftime(self.__iso_time_format),
"time_spend_computed": str(budget.time_spend_computed),
"alerts": []
}
for alert in alerts_data:
record['alerts'].append(alert)
# Append Budget to list of Budgets
self.__budgets.append(record)
print("\tProcessed " + str(len(self.__budgets)) + " Budgets")
return self.__budgets
except Exception as e:
raise RuntimeError(
"Error in __budget_read_budgets " + str(e.args))
##########################################################################
# Audit Configuration
##########################################################################
def __audit_read_tenancy_audit_configuration(self):
# Pulling the Audit Configuration
try:
self.__audit_retention_period = self.__regions[self.__home_region]['audit_client'].get_configuration(
self.__tenancy.id).data.retention_period_days
except Exception as e:
if "NotAuthorizedOrNotFound" in str(e):
self.__audit_retention_period = -1
print("\t*** Access to audit retention requires the user to be part of the Administrator group ***")
self.__errors.append({"id" : self.__tenancy.id, "error" : "*** Access to audit retention requires the user to be part of the Administrator group ***"})
else:
raise RuntimeError("Error in __audit_read_tenancy_audit_configuration " + str(e.args))
print("\tProcessed Audit Configuration.")
return self.__audit_retention_period
##########################################################################
# Cloud Guard Configuration
##########################################################################
def __cloud_guard_read_cloud_guard_configuration(self):
try:
self.__cloud_guard_config = self.__regions[self.__home_region]['cloud_guard_client'].get_configuration(
self.__tenancy.id).data
debug("__cloud_guard_read_cloud_guard_configuration Cloud Guard Configuration is: " + str(self.__cloud_guard_config))
self.__cloud_guard_config_status = self.__cloud_guard_config.status
print("\tProcessed Cloud Guard Configuration.")
return self.__cloud_guard_config_status
except Exception:
self.__cloud_guard_config_status = 'DISABLED'
print("*** Cloud Guard service requires a PayGo account ***")
##########################################################################
# Cloud Guard Configuration
##########################################################################
def __cloud_guard_read_cloud_guard_targets(self):
if self.__cloud_guard_config_status == "ENABLED":
cloud_guard_targets = 0
try:
for compartment in self.__compartments:
if self.__if_not_managed_paas_compartment(compartment.name):
# Getting a compartments target
cg_targets = self.__regions[self.__cloud_guard_config.reporting_region]['cloud_guard_client'].list_targets(
compartment_id=compartment.id).data.items
debug("__cloud_guard_read_cloud_guard_targets: " + str(cg_targets) )
# Looping through targets to get target data
for target in cg_targets:
try:
# Getting Target data like recipes
try:
target_data = self.__regions[self.__cloud_guard_config.reporting_region]['cloud_guard_client'].get_target(
target_id=target.id
).data
except Exception:
target_data = None
deep_link = self.__oci_cgtarget_uri + target.id
record = {
"compartment_id": target.compartment_id,
"defined_tags": target.defined_tags,
"display_name": target.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, target.display_name),
"freeform_tags": target.freeform_tags,
"id": target.id,
"lifecycle_state": target.lifecycle_state,
"lifecyle_details": target.lifecyle_details,
"system_tags": target.system_tags,
"recipe_count": target.recipe_count,
"target_resource_id": target.target_resource_id,
"target_resource_type": target.target_resource_type,
"time_created": target.time_created.strftime(self.__iso_time_format),
"time_updated": str(target.time_updated),
"inherited_by_compartments": target_data.inherited_by_compartments if target_data else "",
"description": target_data.description if target_data else "",
"target_details": target_data.target_details if target_data else "",
"target_detector_recipes": target_data.target_detector_recipes if target_data else "",
"target_responder_recipes": target_data.target_responder_recipes if target_data else ""
}
# Indexing by compartment_id
self.__cloud_guard_targets[compartment.id] = record
cloud_guard_targets += 1
except Exception:
print("\t Failed to get Cloud Guard Target Data for: " + target.display_name + " id: " + target.id)
self.__errors.append({"id" : target.id, "error" : "Failed to get Cloud Guard Target Data for: " + target.display_name + " id: " + target.id })
print("\tProcessed " + str(cloud_guard_targets) + " Cloud Guard Targets")
return self.__cloud_guard_targets
except Exception as e:
print("*** Cloud Guard service requires a PayGo account ***")
self.__errors.append({"id" : self.__tenancy.id, "error" : "Cloud Guard service requires a PayGo account. Error is: " + str(e)})
##########################################################################
# Identity Password Policy
##########################################################################
def __identity_read_tenancy_password_policy(self):
try:
self.__tenancy_password_policy = self.__regions[self.__home_region]['identity_client'].get_authentication_policy(
self.__tenancy.id
).data
print("\tProcessed Tenancy Password Policy...")
return self.__tenancy_password_policy
except Exception as e:
if "NotAuthorizedOrNotFound" in str(e):
self.__tenancy_password_policy = None
print("\t*** Access to password policies in this tenancy requires elevated permissions. ***")
self.__errors.append({"id" : self.__tenancy.id, "error" : "*** Access to password policies in this tenancy requires elevated permissions. ***"})
else:
raise RuntimeError("Error in __identity_read_tenancy_password_policy " + str(e.args))
##########################################################################
# Oracle Notifications Services for Subscriptions
##########################################################################
def __ons_read_subscriptions(self):
try:
for region_key, region_values in self.__regions.items():
# Search each region for all subscriptions outside the managed PaaS compartment
subs_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query OnsSubscription resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
for sub in subs_data:
deep_link = self.__oci_onssub_uri + sub.identifier + '?region=' + region_key
record = {
"id": sub.identifier,
"deep_link": self.__generate_csv_hyperlink(deep_link, sub.identifier),
"compartment_id": sub.compartment_id,
# "created_time": sub.created_time, # this is an INT
"created_time": sub.time_created,
"endpoint": sub.additional_details['endpoint'],
"protocol": sub.additional_details['protocol'],
"topic_id": sub.additional_details['topicId'],
"lifecycle_state": sub.lifecycle_state,
"defined_tags": sub.defined_tags,
"freeform_tags": sub.freeform_tags,
"region": region_key
}
self.__subscriptions.append(record)
print("\tProcessed " + str(len(self.__subscriptions)) + " Subscriptions")
return self.__subscriptions
except Exception as e:
raise RuntimeError("Error in __ons_read_subscriptions " + str(e.args))
##########################################################################
# Identity Tag Default
##########################################################################
def __identity_read_tag_defaults(self):
try:
# Getting Tag Default for the Root Compartment - Only
tag_defaults = oci.pagination.list_call_get_all_results(
self.__regions[self.__home_region]['identity_client'].list_tag_defaults,
compartment_id=self.__tenancy.id
).data
for tag in tag_defaults:
deep_link = self.__oci_compartment_uri + tag.compartment_id + "/tag-defaults"
record = {
"id": tag.id,
"compartment_id": tag.compartment_id,
"value": tag.value,
"deep_link": self.__generate_csv_hyperlink(deep_link, tag.value),
"time_created": tag.time_created.strftime(self.__iso_time_format),
"tag_definition_id": tag.tag_definition_id,
"tag_definition_name": tag.tag_definition_name,
"tag_namespace_id": tag.tag_namespace_id,
"lifecycle_state": tag.lifecycle_state
}
self.__tag_defaults.append(record)
print("\tProcessed " + str(len(self.__tag_defaults)) + " Tag Defaults")
return self.__tag_defaults
except Exception as e:
raise RuntimeError(
"Error in __identity_read_tag_defaults " + str(e.args))
##########################################################################
# Get Service Connectors
##########################################################################
def __sch_read_service_connectors(self):
try:
# looping through regions
for region_key, region_values in self.__regions.items():
# Searching the region for Service Connectors
service_connectors_data = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=oci.resource_search.models.StructuredSearchDetails(
query="query ServiceConnector resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
).data
# Getting Service Connector details
for connector in service_connectors_data:
deep_link = self.__oci_serviceconnector_uri + connector.identifier + "/logging" + '?region=' + region_key
try:
service_connector = region_values['sch_client'].get_service_connector(
service_connector_id=connector.identifier
).data
record = {
"id": service_connector.id,
"display_name": service_connector.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, service_connector.display_name),
"description": service_connector.description,
"freeform_tags": service_connector.freeform_tags,
"defined_tags": service_connector.defined_tags,
"lifecycle_state": service_connector.lifecycle_state,
# "lifecycle_details": service_connector.lifecyle_details,
"system_tags": service_connector.system_tags,
"time_created": service_connector.time_created.strftime(self.__iso_time_format),
# "time_updated": str(service_connector.time_updated),
"target_kind": service_connector.target.kind,
"log_sources": [],
"region": region_key,
"notes": ""
}
for log_source in service_connector.source.log_sources:
record['log_sources'].append({
'compartment_id': log_source.compartment_id,
'log_group_id': log_source.log_group_id,
'log_id': log_source.log_id
})
self.__service_connectors[service_connector.id] = record
except Exception as e:
record = {
"id": connector.identifier,
"display_name": connector.display_name,
"deep_link": self.__generate_csv_hyperlink(deep_link, connector.display_name),
"description": connector.additional_details['description'],
"freeform_tags": connector.freeform_tags,
"defined_tags": connector.defined_tags,
"lifecycle_state": connector.lifecycle_state,
# "lifecycle_details": connector.lifecycle_details,
"system_tags": "",
"time_created": connector.time_created.strftime(self.__iso_time_format),
# "time_updated": str(connector.time_updated),
"target_kind": "",
"log_sources": [],
"region": region_key,
"notes": str(e)
}
self.__service_connectors[connector.identifier] = record
# Returning Service Connectors
print("\tProcessed " + str(len(self.__service_connectors)) + " Service Connectors")
return self.__service_connectors
except Exception as e:
raise RuntimeError("Error in __sch_read_service_connectors " + str(e.args))
##########################################################################
# Resources in root compartment
##########################################################################
def __search_resources_in_root_compartment(self):
query_non_compliant = "query VCN, instance, volume, filesystem, bucket, autonomousdatabase, database, dbsystem resources where compartmentId = '" + self.__tenancy.id + "'"
query_all_resources = "query all resources where compartmentId = '" + self.__tenancy.id + "'"
for region_key, region_values in self.__regions.items():
try:
# Searching for non compliant resources in root compartment
structured_search_query = oci.resource_search.models.StructuredSearchDetails(query=query_non_compliant)
search_results = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=structured_search_query
).data
for item in search_results:
record = {
"display_name": item.display_name,
"id": item.identifier,
"region": region_key
}
self.__resources_in_root_compartment.append(record)
# Searching for all resources in the root compartment
structured_search_all_query = oci.resource_search.models.StructuredSearchDetails(query=query_all_resources)
structured_search_all_resources = oci.pagination.list_call_get_all_results(
region_values['search_client'].search_resources,
search_details=structured_search_all_query
).data
for item in structured_search_all_resources:
# Ignoring global resources like IAM: the region segment of an OCID
# (index 3 when split on '.') is empty for global resources
if item.identifier.split('.')[3]:
record = {
"display_name": item.display_name,
"id": item.identifier,
"region": region_key
}
self.cis_foundations_benchmark_1_2['5.2']['Total'].append(item)
except Exception as e:
raise RuntimeError(
"Error in __search_resources_in_root_compartment " + str(e.args))
print("\tProcessed " + str(len(self.__resources_in_root_compartment)) + " resources in the root compartment")
return self.__resources_in_root_compartment
##########################################################################
# Analyzes Tenancy Data for CIS Report
##########################################################################
def __report_cis_analyze_tenancy_data(self):
# 1.1 Check - Checking for policy statements that are not restricted to a service
for policy in self.__policies:
for statement in policy['statements']:
if "allow group".upper() in statement.upper() and ("to manage all-resources".upper() in statement.upper()) and policy['name'].upper() != "Tenant Admin Policy".upper():
# A group granted manage all-resources (other than the Tenant Admin Policy) fails this rule
self.cis_foundations_benchmark_1_2['1.1']['Status'] = False
self.cis_foundations_benchmark_1_2['1.1']['Findings'].append(policy)
break
# 1.2 Check
for policy in self.__policies:
for statement in policy['statements']:
if "allow group".upper() in statement.upper() and "to manage all-resources in tenancy".upper() in statement.upper() and policy['name'].upper() != "Tenant Admin Policy".upper():
self.cis_foundations_benchmark_1_2['1.2']['Status'] = False
self.cis_foundations_benchmark_1_2['1.2']['Findings'].append(
policy)
# 1.3 Check - May want to add a service check
for policy in self.__policies:
if policy['name'].upper() != "Tenant Admin Policy".upper() and policy['name'].upper() != "PSM-root-policy":
for statement in policy['statements']:
if ("allow group".upper() in statement.upper() and "tenancy".upper() in statement.upper() and ("to manage ".upper() in statement.upper() or "to use".upper() in statement.upper()) and ("all-resources".upper() in statement.upper() or (" groups ".upper() in statement.upper() and " users ".upper() in statement.upper()))):
split_statement = statement.split("where")
# Checking if there is a where clause
if len(split_statement) == 2:
# If there is a where clause remove whitespace and quotes
clean_where_clause = split_statement[1].upper().replace(" ", "").replace("'", "")
if all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.3']["targets"]):
pass
else:
self.cis_foundations_benchmark_1_2['1.3']['Findings'].append(policy)
self.cis_foundations_benchmark_1_2['1.3']['Status'] = False
else:
self.cis_foundations_benchmark_1_2['1.3']['Findings'].append(policy)
self.cis_foundations_benchmark_1_2['1.3']['Status'] = False
# CIS Total 1.1, 1.2, 1.3 Adding - All IAM Policies to CIS Total
self.cis_foundations_benchmark_1_2['1.1']['Total'] = self.__policies
self.cis_foundations_benchmark_1_2['1.2']['Total'] = self.__policies
self.cis_foundations_benchmark_1_2['1.3']['Total'] = self.__policies
# 1.4 Check - Password Policy - Only in home region
if self.__tenancy_password_policy:
if self.__tenancy_password_policy.password_policy.is_lowercase_characters_required:
self.cis_foundations_benchmark_1_2['1.4']['Status'] = True
else:
self.cis_foundations_benchmark_1_2['1.4']['Status'] = None
# 1.5 and 1.6 Checking Identity Domains Password Policy for expiry greater than 365 days and password reuse history less than 24
debug("__report_cis_analyze_tenancy_data: Identity Domains Enabled is: " + str(self.__identity_domains_enabled))
if self.__identity_domains_enabled:
for domain in self.__identity_domains:
if domain['password_policy']:
debug("Policy " + domain['display_name'] + " password expiry is " + str(domain['password_policy']['password_expires_after']))
debug("Policy " + domain['display_name'] + " reuse is " + str(domain['password_policy']['num_passwords_in_history']))
if domain['password_policy']['password_expires_after']:
if domain['password_policy']['password_expires_after'] > 365:
self.cis_foundations_benchmark_1_2['1.5']['Findings'].append(domain)
if domain['password_policy']['num_passwords_in_history']:
if domain['password_policy']['num_passwords_in_history'] < 24:
self.cis_foundations_benchmark_1_2['1.6']['Findings'].append(domain)
else:
debug("__report_cis_analyze_tenancy_data 1.5 and 1.6 no password policy")
self.cis_foundations_benchmark_1_2['1.5']['Findings'].append(domain)
self.cis_foundations_benchmark_1_2['1.6']['Findings'].append(domain)
if self.cis_foundations_benchmark_1_2['1.5']['Findings']:
self.cis_foundations_benchmark_1_2['1.5']['Status'] = False
else:
self.cis_foundations_benchmark_1_2['1.5']['Status'] = True
if self.cis_foundations_benchmark_1_2['1.6']['Findings']:
self.cis_foundations_benchmark_1_2['1.6']['Status'] = False
else:
self.cis_foundations_benchmark_1_2['1.6']['Status'] = True
# 1.7 Check - Local Users w/o MFA
for user in self.__users:
if user['identity_provider_id'] is None and user['can_use_console_password'] and not (user['is_mfa_activated']) and user['lifecycle_state'] == 'ACTIVE':
self.cis_foundations_benchmark_1_2['1.7']['Status'] = False
self.cis_foundations_benchmark_1_2['1.7']['Findings'].append(
user)
# CIS Total 1.7 Adding - All Users to CIS Total
self.cis_foundations_benchmark_1_2['1.7']['Total'] = self.__users
# 1.8 Check - API Keys over 90
for user in self.__users:
if user['api_keys']:
for key in user['api_keys']:
if self.api_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format) and key['lifecycle_state'] == 'ACTIVE':
self.cis_foundations_benchmark_1_2['1.8']['Status'] = False
finding = {
"user_name": user['name'],
"user_id": user['id'],
"key_id": key['id'],
'fingerprint': key['fingerprint'],
'inactive_status': key['inactive_status'],
'lifecycle_state': key['lifecycle_state'],
'time_created': key['time_created']
}
self.cis_foundations_benchmark_1_2['1.8']['Findings'].append(
finding)
# CIS Total 1.8 Adding - API Keys to CIS Total
self.cis_foundations_benchmark_1_2['1.8']['Total'].append(key)
# CIS 1.9 Check - Old Customer Secrets
for user in self.__users:
if user['customer_secret_keys']:
for key in user['customer_secret_keys']:
if self.api_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format) and key['lifecycle_state'] == 'ACTIVE':
self.cis_foundations_benchmark_1_2['1.9']['Status'] = False
finding = {
"user_name": user['name'],
"user_id": user['id'],
"id": key['id'],
'display_name': key['display_name'],
'inactive_status': key['inactive_status'],
'lifecycle_state': key['lifecycle_state'],
'time_created': key['time_created'],
'time_expires': key['time_expires'],
}
self.cis_foundations_benchmark_1_2['1.9']['Findings'].append(
finding)
# CIS Total 1.9 Adding - Customer Secrets to CIS Total
self.cis_foundations_benchmark_1_2['1.9']['Total'].append(key)
# CIS 1.10 Check - Old Auth Tokens
for user in self.__users:
if user['auth_tokens']:
for key in user['auth_tokens']:
if self.api_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format) and key['lifecycle_state'] == 'ACTIVE':
self.cis_foundations_benchmark_1_2['1.10']['Status'] = False
finding = {
"user_name": user['name'],
"user_id": user['id'],
"id": key['id'],
"description": key['description'],
"inactive_status": key['inactive_status'],
"lifecycle_state": key['lifecycle_state'],
"time_created": key['time_created'],
"time_expires": key['time_expires'],
"token": key['token']
}
self.cis_foundations_benchmark_1_2['1.10']['Findings'].append(
finding)
# CIS Total 1.10 Adding - Keys to CIS Total
self.cis_foundations_benchmark_1_2['1.10']['Total'].append(
key)
# CIS 1.11 Active Admins with API keys
# Iterating through all users to see if they have API Keys and if they are active users
for user in self.__users:
if 'Administrators' in user['groups'] and user['api_keys'] and user['lifecycle_state'] == 'ACTIVE':
self.cis_foundations_benchmark_1_2['1.11']['Status'] = False
self.cis_foundations_benchmark_1_2['1.11']['Findings'].append(
user)
# CIS Total 1.11 Adding - All IAM Users in Administrator group to CIS Total
if 'Administrators' in user['groups'] and user['lifecycle_state'] == 'ACTIVE':
self.cis_foundations_benchmark_1_2['1.11']['Total'].append(user)
# CIS 1.12 Check - Local users must have a verified email address
# Iterating through all users to find active local users without a verified email
for user in self.__users:
if user['external_identifier'] is None and user['lifecycle_state'] == 'ACTIVE' and not (user['email_verified']):
self.cis_foundations_benchmark_1_2['1.12']['Status'] = False
self.cis_foundations_benchmark_1_2['1.12']['Findings'].append(
user)
# CIS Total 1.12 Adding - All IAM Users to CIS Total
self.cis_foundations_benchmark_1_2['1.12']['Total'] = self.__users
# CIS 1.13 Check - Ensure Dynamic Groups are used for OCI instances, OCI Cloud Databases and OCI Functions to access OCI resources
# Iterating through all dynamic groups to ensure at least one matching rule covers fnfunc, instance or autonomous resources. Status starts False and flips to True on the first match
for dynamic_group in self.__dynamic_groups:
if any(oci_resource.upper() in str(dynamic_group['matching_rule'].upper()) for oci_resource in self.cis_iam_checks['1.13']['resources']):
self.cis_foundations_benchmark_1_2['1.13']['Status'] = True
else:
self.cis_foundations_benchmark_1_2['1.13']['Findings'].append(
dynamic_group)
# Clearing finding
if self.cis_foundations_benchmark_1_2['1.13']['Status']:
self.cis_foundations_benchmark_1_2['1.13']['Findings'] = []
# CIS Total 1.13 Adding - All Dynamic Groups to CIS Total
self.cis_foundations_benchmark_1_2['1.13']['Total'] = self.__dynamic_groups
# CIS 1.14 Check - Ensure storage service-level admins cannot delete resources they manage.
# Iterating through all policies
for policy in self.__policies:
if policy['name'].upper() != "Tenant Admin Policy".upper() and policy['name'].upper() != "PSM-root-policy":
for statement in policy['statements']:
for resource in self.cis_iam_checks['1.14']:
if "allow group".upper() in statement.upper() and "manage".upper() in statement.upper() and resource.upper() in statement.upper():
split_statement = statement.split("where")
if len(split_statement) == 2:
clean_where_clause = split_statement[1].upper().replace(" ", "").replace("'", "")
if all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14'][resource]) and not(all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14-storage-admin'][resource])):
debug("__report_cis_analyze_tenancy_data no permissions to delete storage : " + str(policy['name']))
pass
# Checking if this is the Storage admin with allowed
elif all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14-storage-admin'][resource]) and not(all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14'][resource])):
debug("__report_cis_analyze_tenancy_data storage admin policy is : " + str(policy['name']))
pass
else:
self.cis_foundations_benchmark_1_2['1.14']['Findings'].append(policy)
debug("__report_cis_analyze_tenancy_data else policy is /n: " + str(policy['name']))
else:
self.cis_foundations_benchmark_1_2['1.14']['Findings'].append(policy)
if self.cis_foundations_benchmark_1_2['1.14']['Findings']:
self.cis_foundations_benchmark_1_2['1.14']['Status'] = False
else:
self.cis_foundations_benchmark_1_2['1.14']['Status'] = True
# CIS Total 1.14 Adding - All IAM Policies to CIS Total
self.cis_foundations_benchmark_1_2['1.14']['Total'] = self.__policies
# CIS 2.1 & 2.2 Check - Security List Ingress from 0.0.0.0/0 on ports 22 and 3389
for sl in self.__network_security_lists:
for irule in sl['ingress_security_rules']:
if irule['source'] == "0.0.0.0/0" and irule['protocol'] == '6':
if irule['tcp_options'] and irule['tcp_options']['destinationPortRange']:
port_min = irule['tcp_options']['destinationPortRange']['min']
port_max = irule['tcp_options']['destinationPortRange']['max']
ports_range = range(port_min, port_max + 1)
if 22 in ports_range:
self.cis_foundations_benchmark_1_2['2.1']['Status'] = False
self.cis_foundations_benchmark_1_2['2.1']['Findings'].append(sl)
if 3389 in ports_range:
self.cis_foundations_benchmark_1_2['2.2']['Status'] = False
self.cis_foundations_benchmark_1_2['2.2']['Findings'].append(sl)
break
else:
# If TCP Options is null, the rule allows all ports
self.cis_foundations_benchmark_1_2['2.1']['Status'] = False
self.cis_foundations_benchmark_1_2['2.1']['Findings'].append(sl)
self.cis_foundations_benchmark_1_2['2.2']['Status'] = False
self.cis_foundations_benchmark_1_2['2.2']['Findings'].append(sl)
break
elif irule['source'] == "0.0.0.0/0" and irule['protocol'] == 'all':
# All protocols allowed, which includes TCP on all ports
self.cis_foundations_benchmark_1_2['2.1']['Status'] = False
self.cis_foundations_benchmark_1_2['2.1']['Findings'].append(sl)
self.cis_foundations_benchmark_1_2['2.2']['Status'] = False
self.cis_foundations_benchmark_1_2['2.2']['Findings'].append(sl)
break
# CIS Total 2.1, 2.2 Adding - All Security Lists to CIS Total
self.cis_foundations_benchmark_1_2['2.1']['Total'] = self.__network_security_lists
self.cis_foundations_benchmark_1_2['2.2']['Total'] = self.__network_security_lists
# CIS 2.5 Check - any rule with source 0.0.0.0/0 where protocol is not 1 (ICMP)
# CIS Total 2.5 Adding - All Default Security List for to CIS Total
for sl in self.__network_security_lists:
if sl['display_name'].startswith("Default Security List for "):
self.cis_foundations_benchmark_1_2['2.5']['Total'].append(sl)
for irule in sl['ingress_security_rules']:
if irule['source'] == "0.0.0.0/0" and irule['protocol'] != '1':
self.cis_foundations_benchmark_1_2['2.5']['Status'] = False
self.cis_foundations_benchmark_1_2['2.5']['Findings'].append(
sl)
break
# CIS 2.3 and 2.4 Check - Network Security Groups Ingress from 0.0.0.0/0 on ports 22, 3389
for nsg in self.__network_security_groups:
for rule in nsg['rules']:
if rule['source'] == "0.0.0.0/0" and rule['protocol'] == '6':
if rule['tcp_options'] and rule['tcp_options'].destination_port_range:
port_min = rule['tcp_options'].destination_port_range.min
port_max = rule['tcp_options'].destination_port_range.max
ports_range = range(port_min, port_max + 1)
if 22 in ports_range:
self.cis_foundations_benchmark_1_2['2.3']['Status'] = False
self.cis_foundations_benchmark_1_2['2.3']['Findings'].append(
nsg)
if 3389 in ports_range:
self.cis_foundations_benchmark_1_2['2.4']['Status'] = False
self.cis_foundations_benchmark_1_2['2.4']['Findings'].append(nsg)
break
else:
# If TCP Options is null it includes all ports
self.cis_foundations_benchmark_1_2['2.3']['Status'] = False
self.cis_foundations_benchmark_1_2['2.3']['Findings'].append(nsg)
self.cis_foundations_benchmark_1_2['2.4']['Status'] = False
self.cis_foundations_benchmark_1_2['2.4']['Findings'].append(nsg)
break
elif rule['source'] == "0.0.0.0/0" and rule['protocol'] == 'all':
# All protocols allowed, which includes TCP on all ports
self.cis_foundations_benchmark_1_2['2.3']['Status'] = False
self.cis_foundations_benchmark_1_2['2.3']['Findings'].append(nsg)
self.cis_foundations_benchmark_1_2['2.4']['Status'] = False
self.cis_foundations_benchmark_1_2['2.4']['Findings'].append(nsg)
break
# CIS Total 2.3 & 2.4 Adding - All NSGs to CIS Total
self.cis_foundations_benchmark_1_2['2.3']['Total'] = self.__network_security_groups
self.cis_foundations_benchmark_1_2['2.4']['Total'] = self.__network_security_groups
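# The NSG and security-list checks above test whether a rule's TCP destination
# port range covers a sensitive port (22 for SSH, 3389 for RDP), treating null
# TCP options as "all ports". A minimal standalone sketch of that test, using a
# hypothetical plain-dict rule rather than the OCI SDK model object:

```python
def rule_exposes_port(rule, port):
    # A rule with no TCP options covers every destination port
    if rule.get('tcp_options') is None:
        return True
    port_min, port_max = rule['tcp_options']
    return port_min <= port <= port_max

# A rule covering ports 20-25 exposes SSH (22) but not RDP (3389)
assert rule_exposes_port({'tcp_options': (20, 25)}, 22) is True
assert rule_exposes_port({'tcp_options': (20, 25)}, 3389) is False
assert rule_exposes_port({'tcp_options': None}, 3389) is True
```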
# CIS 2.6 - Ensure Oracle Integration Cloud (OIC) access is restricted to allowed sources
# Iterating through OIC instances to verify network access rules exist and 0.0.0.0/0 is not in the list
for integration_instance in self.__integration_instances:
if not (integration_instance['network_endpoint_details']):
self.cis_foundations_benchmark_1_2['2.6']['Status'] = False
self.cis_foundations_benchmark_1_2['2.6']['Findings'].append(
integration_instance)
elif integration_instance['network_endpoint_details']:
if "0.0.0.0/0" in str(integration_instance['network_endpoint_details']):
self.cis_foundations_benchmark_1_2['2.6']['Status'] = False
self.cis_foundations_benchmark_1_2['2.6']['Findings'].append(
integration_instance)
# CIS Total 2.6 Adding - All OIC Instances to CIS Total
self.cis_foundations_benchmark_1_2['2.6']['Total'] = self.__integration_instances
# CIS 2.7 - Ensure Oracle Analytics Cloud (OAC) access is restricted to allowed sources or deployed within a VCN
for analytics_instance in self.__analytics_instances:
if analytics_instance['network_endpoint_type'].upper() == 'PUBLIC':
if not (analytics_instance['network_endpoint_details'].whitelisted_ips):
self.cis_foundations_benchmark_1_2['2.7']['Status'] = False
self.cis_foundations_benchmark_1_2['2.7']['Findings'].append(analytics_instance)
elif "0.0.0.0/0" in analytics_instance['network_endpoint_details'].whitelisted_ips:
self.cis_foundations_benchmark_1_2['2.7']['Status'] = False
self.cis_foundations_benchmark_1_2['2.7']['Findings'].append(
analytics_instance)
# CIS Total 2.7 Adding - All OAC Instances to CIS Total
self.cis_foundations_benchmark_1_2['2.7']['Total'] = self.__analytics_instances
# CIS 2.8 Check - Ensure Oracle Autonomous Shared Databases (ADB) access is restricted to allowed sources or deployed within a VCN
# Iterating through ADBs checking for a missing whitelist and subnet, or an allowed IP of 0.0.0.0/0
# Issue 295 fixed
for autonomous_database in self.__autonomous_databases:
if autonomous_database['lifecycle_state'] not in [ oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATED, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATING, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_UNAVAILABLE ]:
if not (autonomous_database['whitelisted_ips']) and not (autonomous_database['subnet_id']):
self.cis_foundations_benchmark_1_2['2.8']['Status'] = False
self.cis_foundations_benchmark_1_2['2.8']['Findings'].append(
autonomous_database)
elif autonomous_database['whitelisted_ips']:
# Check the whole whitelist once to avoid appending duplicate findings per entry
if '0.0.0.0/0' in str(autonomous_database['whitelisted_ips']):
self.cis_foundations_benchmark_1_2['2.8']['Status'] = False
self.cis_foundations_benchmark_1_2['2.8']['Findings'].append(
autonomous_database)
# CIS Total 2.8 Adding - All ADBs to CIS Total
self.cis_foundations_benchmark_1_2['2.8']['Total'] = self.__autonomous_databases
# CIS 3.1 Check - Ensure Audit log retention >= 365 days - Only checking in home region
if self.__audit_retention_period >= 365:
self.cis_foundations_benchmark_1_2['3.1']['Status'] = True
# CIS Check 3.2 - Check for Default Tags in Root Compartment
# Iterate through tags looking for ${iam.principal.name}
for tag in self.__tag_defaults:
if tag['value'] == "${iam.principal.name}":
self.cis_foundations_benchmark_1_2['3.2']['Status'] = True
# CIS Total 3.2 Adding - All Tag Defaults to CIS Total
self.cis_foundations_benchmark_1_2['3.2']['Total'] = self.__tag_defaults
# CIS Check 3.3 - Check for Active Notification and Subscription
if len(self.__subscriptions) > 0:
self.cis_foundations_benchmark_1_2['3.3']['Status'] = True
# CIS Check 3.3 Total - All Subscriptions to CIS Total
self.cis_foundations_benchmark_1_2['3.3']['Total'] = self.__subscriptions
# CIS Checks 3.4 - 3.13
# Iterate through all event rules
for event in self.__event_rules:
# Convert Event Condition to dict
jsonable_str = event['condition'].lower().replace("'", "\"")
try:
event_dict = json.loads(jsonable_str)
except Exception:
print("*** Invalid Event Condition for event (not in JSON format): " + event['display_name'] + " ***")
event_dict = {}
# Issue 256: 'eventtype' not in event_dict (i.e. missing in event condition)
if event_dict and 'eventtype' in event_dict:
for key, changes in self.cis_monitoring_checks.items():
# Checking whether the CIS change list is a subset of the event condition
try:
if (all(x in event_dict['eventtype'] for x in changes)):
self.cis_foundations_benchmark_1_2[key]['Status'] = True
except Exception:
print("*** Invalid Event Data for event: " + event['display_name'] + " ***")
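# The event-rule loop above turns a condition string (which uses single quotes)
# into a dict via json.loads, then tests whether the CIS-required event types
# are a subset of the rule's condition. A standalone sketch with a made-up
# condition string (note the quote replacement would break on values containing
# apostrophes, which is why the original wraps it in try/except):

```python
import json

condition = "{'eventType': ['com.oraclecloud.identitycontrolplane.creategroup'], 'data': {}}"
# Lowercasing also lowercases the keys, hence the 'eventtype' lookup below
event_dict = json.loads(condition.lower().replace("'", '"'))

required = ['com.oraclecloud.identitycontrolplane.creategroup']
# The check passes when every required event type appears in the condition
assert all(x in event_dict['eventtype'] for x in required)
```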
# CIS Check 3.14 - VCN flow logs enabled
# Generate list of subnets IDs
for subnet in self.__network_subnets:
if not (subnet['id'] in self.__subnet_logs):
self.cis_foundations_benchmark_1_2['3.14']['Status'] = False
self.cis_foundations_benchmark_1_2['3.14']['Findings'].append(
subnet)
# CIS Check 3.14 Total - Adding All Subnets to total
self.cis_foundations_benchmark_1_2['3.14']['Total'] = self.__network_subnets
# CIS Check 3.15 - Cloud Guard enabled
debug("__report_cis_analyze_tenancy_data Cloud Guard Check: " + str(self.__cloud_guard_config_status))
if self.__cloud_guard_config_status == 'ENABLED':
self.cis_foundations_benchmark_1_2['3.15']['Status'] = True
else:
self.cis_foundations_benchmark_1_2['3.15']['Status'] = False
# CIS Check 3.16 - Encryption keys older than 365 days
# Generating list of keys
for vault in self.__vaults:
for key in vault['keys']:
if self.kms_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format):
self.cis_foundations_benchmark_1_2['3.16']['Status'] = False
self.cis_foundations_benchmark_1_2['3.16']['Findings'].append(
key)
# CIS Check 3.16 Total - Adding Key to total
self.cis_foundations_benchmark_1_2['3.16']['Total'].append(key)
# CIS Check 3.17 - Object Storage with Logs
# Generating list of buckets names
for bucket in self.__buckets:
if not (bucket['name'] in self.__write_bucket_logs):
self.cis_foundations_benchmark_1_2['3.17']['Status'] = False
self.cis_foundations_benchmark_1_2['3.17']['Findings'].append(
bucket)
# CIS Check 3.17 Total - Adding All Buckets to total
self.cis_foundations_benchmark_1_2['3.17']['Total'] = self.__buckets
# CIS Section 4.1 Bucket Checks
# Generating list of buckets names
for bucket in self.__buckets:
if 'public_access_type' in bucket:
if bucket['public_access_type'] != 'NoPublicAccess':
self.cis_foundations_benchmark_1_2['4.1.1']['Status'] = False
self.cis_foundations_benchmark_1_2['4.1.1']['Findings'].append(
bucket)
if 'kms_key_id' in bucket:
if not (bucket['kms_key_id']):
self.cis_foundations_benchmark_1_2['4.1.2']['Findings'].append(
bucket)
self.cis_foundations_benchmark_1_2['4.1.2']['Status'] = False
if 'versioning' in bucket:
if bucket['versioning'] != "Enabled":
self.cis_foundations_benchmark_1_2['4.1.3']['Findings'].append(
bucket)
self.cis_foundations_benchmark_1_2['4.1.3']['Status'] = False
# CIS Check 4.1.1,4.1.2,4.1.3 Total - Adding All Buckets to total
self.cis_foundations_benchmark_1_2['4.1.1']['Total'] = self.__buckets
self.cis_foundations_benchmark_1_2['4.1.2']['Total'] = self.__buckets
self.cis_foundations_benchmark_1_2['4.1.3']['Total'] = self.__buckets
# CIS Section 4.2.1 Block Volume Checks
# Generating list of block volumes names
for volume in self.__block_volumes:
if 'kms_key_id' in volume:
if not (volume['kms_key_id']):
self.cis_foundations_benchmark_1_2['4.2.1']['Findings'].append(
volume)
self.cis_foundations_benchmark_1_2['4.2.1']['Status'] = False
# CIS Check 4.2.1 Total - Adding All Block Volumes to total
self.cis_foundations_benchmark_1_2['4.2.1']['Total'] = self.__block_volumes
# CIS Section 4.2.2 Boot Volume Checks
# Generating list of boot volume names
for boot_volume in self.__boot_volumes:
if 'kms_key_id' in boot_volume:
if not (boot_volume['kms_key_id']):
self.cis_foundations_benchmark_1_2['4.2.2']['Findings'].append(
boot_volume)
self.cis_foundations_benchmark_1_2['4.2.2']['Status'] = False
# CIS Check 4.2.2 Total - Adding All Boot Volumes to total
self.cis_foundations_benchmark_1_2['4.2.2']['Total'] = self.__boot_volumes
# CIS Section 4.3.1 FSS Checks
# Generating list of FSS names
for file_system in self.__file_storage_system:
if 'kms_key_id' in file_system:
if not (file_system['kms_key_id']):
self.cis_foundations_benchmark_1_2['4.3.1']['Findings'].append(
file_system)
self.cis_foundations_benchmark_1_2['4.3.1']['Status'] = False
# CIS Check 4.3.1 Total - Adding All File Storage Systems to total
self.cis_foundations_benchmark_1_2['4.3.1']['Total'] = self.__file_storage_system
# CIS Section 5 Checks
# Checking for more than one compartment (accounting for the ManagedPaaS compartment)
if len(self.__compartments) < 2:
self.cis_foundations_benchmark_1_2['5.1']['Status'] = False
if len(self.__resources_in_root_compartment) > 0:
for item in self.__resources_in_root_compartment:
self.cis_foundations_benchmark_1_2['5.2']['Status'] = False
self.cis_foundations_benchmark_1_2['5.2']['Findings'].append(
item)
##########################################################################
# Recursive function that gets the child compartments of a compartment
##########################################################################
def __get_children(self, parent, compartments):
try:
kids = compartments[parent]
except Exception:
kids = []
if kids:
for kid in compartments[parent]:
kids = kids + self.__get_children(kid, compartments)
return kids
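# __get_children above walks a parent -> children map recursively to collect
# every descendant of a compartment. A standalone sketch with made-up ids:

```python
def get_children(parent, compartments):
    # Descendants of a node: its direct children plus their descendants
    kids = compartments.get(parent, [])
    for kid in list(kids):
        kids = kids + get_children(kid, compartments)
    return kids

tree = {'root': ['a', 'b'], 'a': ['a1']}
assert set(get_children('root', tree)) == {'a', 'b', 'a1'}
assert get_children('missing', tree) == []
```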
##########################################################################
# Analyzes Tenancy Data for Oracle Best Practices Report
##########################################################################
def __obp_analyze_tenancy_data(self):
#######################################
# Budget Checks
#######################################
# Determines if a Budget Exists with an alert rule
if len(self.__budgets) > 0:
for budget in self.__budgets:
if budget['alert_rule_count'] > 0 and budget['target_compartment_id'] == self.__tenancy.id:
self.obp_foundations_checks['Cost_Tracking_Budgets']['Status'] = True
self.obp_foundations_checks['Cost_Tracking_Budgets']['OBP'].append(budget)
else:
self.obp_foundations_checks['Cost_Tracking_Budgets']['Findings'].append(budget)
# Stores Regional Checks
for region_key, region_values in self.__regions.items():
self.__obp_regional_checks[region_key] = {
"Audit": {
"tenancy_level_audit": False,
"tenancy_level_include_sub_comps": False,
"compartments": [],
"findings": []
},
"VCN": {
"subnets": [],
"findings": []
},
"Write_Bucket": {
"buckets": [],
"findings": []
},
"Read_Bucket": {
"buckets": [],
"findings": []
},
"Network_Connectivity": {
"drgs": [],
"findings": [],
"status": False
},
}
#######################################
# OCI Audit Log Compartments Checks
#######################################
list_of_all_compartments = []
for compartment in self.__compartments:
list_of_all_compartments.append(compartment.id)
# Building a hash table of the parent-child hierarchy for the audit checks
dict_of_compartments = {}
for compartment in self.__compartments:
if "tenancy" not in compartment.id:
try:
dict_of_compartments[compartment.compartment_id].append(compartment.id)
except Exception:
dict_of_compartments[compartment.compartment_id] = []
dict_of_compartments[compartment.compartment_id].append(compartment.id)
# This is used for comparing the compartments that are audited against the full list of compartments
set_of_all_compartments = set(list_of_all_compartments)
# Collecting Service Connector logs related to compartments
for sch_id, sch_values in self.__service_connectors.items():
# Only Active SCH with a target that is configured
if sch_values['lifecycle_state'].upper() == "ACTIVE" and sch_values['target_kind']:
for source in sch_values['log_sources']:
try:
# Checking if the compartment being logged is the tenancy and it includes all child compartments
if source['compartment_id'] == self.__tenancy.id and source['log_group_id'].upper() == "_Audit_Include_Subcompartment".upper():
self.__obp_regional_checks[sch_values['region']]['Audit']['tenancy_level_audit'] = True
self.__obp_regional_checks[sch_values['region']]['Audit']['tenancy_level_include_sub_comps'] = True
# Since it is not the tenancy, add the compartment to the list and check if sub compartments are included
elif source['log_group_id'].upper() == "_Audit_Include_Subcompartment".upper():
self.__obp_regional_checks[sch_values['region']]['Audit']['compartments'] += self.__get_children(source['compartment_id'], dict_of_compartments)
elif source['log_group_id'].upper() == "_Audit".upper():
self.__obp_regional_checks[sch_values['region']]['Audit']['compartments'].append(source['compartment_id'])
except Exception:
# There can be empty log groups
pass
# Analyzing Service Connector Audit Logs to see if each region has all compartments
for region_key, region_values in self.__obp_regional_checks.items():
# Checking if the tenancy OCID with all child compartments included was already found
if not region_values['Audit']['tenancy_level_audit']:
audit_findings = set_of_all_compartments - set(region_values['Audit']['compartments'])
# If there are items left in the set then not everything in the tenancy is being audited
if audit_findings:
region_values['Audit']['findings'] += list(audit_findings)
else:
region_values['Audit']['tenancy_level_audit'] = True
region_values['Audit']['findings'] = []
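# The audit-coverage check above boils down to a set difference: a region's
# audit logging covers the tenancy only when the logged compartments equal the
# full compartment set. A sketch with made-up compartment ids:

```python
all_compartments = {'ocid.comp1', 'ocid.comp2', 'ocid.comp3'}
logged_compartments = {'ocid.comp1', 'ocid.comp3'}

# Anything left over is a compartment whose audit logs are not collected
audit_findings = all_compartments - logged_compartments
tenancy_level_audit = not audit_findings

assert audit_findings == {'ocid.comp2'}
assert tenancy_level_audit is False
```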
# Consolidating Audit findings into the OBP Checks
for region_key, region_values in self.__obp_regional_checks.items():
# If this flag is not set, not all compartments are logged in the region
if not region_values['Audit']['tenancy_level_audit']:
self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['Status'] = False
# If this flag is not set, the region lacks tenancy-level logging with the include-sub-compartments flag checked
if not region_values['Audit']['tenancy_level_include_sub_comps']:
self.obp_foundations_checks['SIEM_Audit_Incl_Sub_Comp']['Status'] = False
self.obp_foundations_checks['SIEM_Audit_Incl_Sub_Comp']['Findings'].append({"region_name": region_key})
else:
self.obp_foundations_checks['SIEM_Audit_Incl_Sub_Comp']['OBP'].append({"region_name": region_key})
# Compartment Logs that are missed in the region
for compartment in region_values['Audit']['findings']:
try:
finding = list(filter(lambda source: source['id'] == compartment, self.__raw_compartment))[0]
record = {
"id": finding['id'],
"name": finding['name'],
"deep_link": finding['deep_link'],
"compartment_id": finding['compartment_id'],
"defined_tags": finding['defined_tags'],
"description": finding['description'],
"freeform_tags": finding['freeform_tags'],
"inactive_status": finding['inactive_status'],
"is_accessible": finding['is_accessible'],
"lifecycle_state": finding['lifecycle_state'],
"time_created": finding['time_created'],
"region": region_key
}
except Exception as e:
record = {
"id": compartment,
"name": "Compartment No Longer Exists",
"deep_link": "",
"compartment_id": "",
"defined_tags": "",
"description": str(e),
"freeform_tags": "",
"inactive_status": "",
"is_accessible": "",
"lifecycle_state": "",
"time_created": "",
"region": region_key
}
# Need to check for duplicates before adding the record
exists_already = list(filter(lambda source: source['id'] == record['id'] and source['region'] == record['region'], self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['Findings']))
if not exists_already:
self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['Findings'].append(record)
# Compartment logs that are present in the region
for compartment in region_values['Audit']['compartments']:
try:
finding = list(filter(lambda source: source['id'] == compartment, self.__raw_compartment))[0]
record = {
"id": finding['id'],
"name": finding['name'],
"deep_link": finding['deep_link'],
"compartment_id": finding['compartment_id'],
"defined_tags": finding['defined_tags'],
"description": finding['description'],
"freeform_tags": finding['freeform_tags'],
"inactive_status": finding['inactive_status'],
"is_accessible": finding['is_accessible'],
"lifecycle_state": finding['lifecycle_state'],
"time_created": finding['time_created'],
"region": region_key
}
except Exception as e:
record = {
"id": compartment,
"name": "Compartment No Longer Exists",
"deep_link": "",
"compartment_id": "",
"defined_tags": "",
"description": str(e),
"freeform_tags": "",
"inactive_status": "",
"is_accessible": "",
"lifecycle_state": "",
"time_created": "",
"region": region_key
}
# Need to check for duplicates before adding the record
exists_already = list(filter(lambda source: source['id'] == record['id'] and source['region'] == record['region'], self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['OBP']))
if not exists_already:
self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['OBP'].append(record)
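# Records above are de-duplicated before being appended: a finding counts as a
# duplicate only when an existing entry shares both its id and region. A sketch
# of that filter pattern with made-up records:

```python
findings = [{'id': 'c1', 'region': 'us-ashburn-1'}]
record = {'id': 'c1', 'region': 'uk-london-1'}

exists_already = list(filter(
    lambda source: source['id'] == record['id'] and source['region'] == record['region'],
    findings))
if not exists_already:
    findings.append(record)

# Same id in a different region is not a duplicate, so both entries remain
assert len(findings) == 2
```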
#######################################
# Subnet and Bucket Log Checks
#######################################
for sch_id, sch_values in self.__service_connectors.items():
# Only Active SCH with a target that is configured
if sch_values['lifecycle_state'].upper() == "ACTIVE" and sch_values['target_kind']:
# Subnet Logs Checks
for subnet_id, log_values in self.__subnet_logs.items():
log_id = log_values['log_id']
log_group_id = log_values['log_group_id']
log_record = {"sch_id": sch_id, "sch_name": sch_values['display_name'], "id": subnet_id}
subnet_log_group_in_sch = list(filter(lambda source: source['log_group_id'] == log_group_id, sch_values['log_sources']))
subnet_log_in_sch = list(filter(lambda source: source['log_id'] == log_id, sch_values['log_sources']))
# Checking if the subnet's log group is in the SCH's log sources and the log_id is empty, so it covers everything in the log group
if subnet_log_group_in_sch and not (subnet_log_in_sch):
self.__obp_regional_checks[sch_values['region']]['VCN']['subnets'].append(log_record)
# Checking if the subnet's log id is in the service connector's log sources; if so, add it
elif subnet_log_in_sch:
self.__obp_regional_checks[sch_values['region']]['VCN']['subnets'].append(log_record)
# else:
# self.__obp_regional_checks[sch_values['region']]['VCN']['findings'].append(subnet_id)
# Bucket Write Logs Checks
for bucket_name, log_values in self.__write_bucket_logs.items():
log_id = log_values['log_id']
log_group_id = log_values['log_group_id']
log_record = {"sch_id": sch_id, "sch_name": sch_values['display_name'], "id": bucket_name}
log_region = log_values['region']
bucket_log_group_in_sch = list(filter(lambda source: source['log_group_id'] == log_group_id and sch_values['region'] == log_region, sch_values['log_sources']))
bucket_log_in_sch = list(filter(lambda source: source['log_id'] == log_id and sch_values['region'] == log_region, sch_values['log_sources']))
# Checking if the bucket's log group is in the SCH's log sources and the log_id is empty, so it covers everything in the log group
if bucket_log_group_in_sch and not (bucket_log_in_sch):
self.__obp_regional_checks[sch_values['region']]['Write_Bucket']['buckets'].append(log_record)
# Checking if the bucket's log id is in the service connector's log sources; if so, add it
elif bucket_log_in_sch:
self.__obp_regional_checks[sch_values['region']]['Write_Bucket']['buckets'].append(log_record)
# else:
# self.__obp_regional_checks[sch_values['region']]['Write_Bucket']['findings'].append(bucket_name)
# Bucket Read Log Checks
for bucket_name, log_values in self.__read_bucket_logs.items():
log_id = log_values['log_id']
log_group_id = log_values['log_group_id']
log_record = {"sch_id": sch_id, "sch_name": sch_values['display_name'], "id": bucket_name}
log_region = log_values['region']
bucket_log_group_in_sch = list(filter(lambda source: source['log_group_id'] == log_group_id and sch_values['region'] == log_region, sch_values['log_sources']))
bucket_log_in_sch = list(filter(lambda source: source['log_id'] == log_id and sch_values['region'] == log_region, sch_values['log_sources']))
# Checking if the bucket's log group is in the SCH's log sources and the log_id is empty, so it covers everything in the log group
if bucket_log_group_in_sch and not (bucket_log_in_sch):
self.__obp_regional_checks[sch_values['region']]['Read_Bucket']['buckets'].append(log_record)
# Checking if the bucket's log id is in the service connector's log sources; if so, add it
elif bucket_log_in_sch:
self.__obp_regional_checks[sch_values['region']]['Read_Bucket']['buckets'].append(log_record)
# Consolidating regional SERVICE LOGGING findings into centralized finding report
for region_key, region_values in self.__obp_regional_checks.items():
for finding in region_values['VCN']['subnets']:
logged_subnet = list(filter(lambda subnet: subnet['id'] == finding['id'], self.__network_subnets))
# Checking that the subnet has not already been written to OBP
existing_finding = list(filter(lambda subnet: subnet['id'] == finding['id'], self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['OBP']))
if len(logged_subnet) != 0:
record = logged_subnet[0].copy()
record['sch_id'] = finding['sch_id']
record['sch_name'] = finding['sch_name']
if logged_subnet and not (existing_finding):
self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['OBP'].append(record)
# else:
# print("Found this subnet being logged but the subnet does not exist: " + str(finding))
for finding in region_values['Write_Bucket']['buckets']:
logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['id'], self.__buckets))
if len(logged_bucket) != 0:
record = logged_bucket[0].copy()
record['sch_id'] = finding['sch_id']
record['sch_name'] = finding['sch_name']
if logged_bucket:
self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['OBP'].append(record)
for finding in region_values['Read_Bucket']['buckets']:
logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['id'], self.__buckets))
if len(logged_bucket) != 0:
record = logged_bucket[0].copy()
record['sch_id'] = finding['sch_id']
record['sch_name'] = finding['sch_name']
if logged_bucket:
self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['OBP'].append(record)
# Looking at all buckets and checking whether each meets one of the OBPs in one of the regions
for finding in self.__buckets:
read_logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['name'] and bucket['region'] == finding['region'], self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['OBP']))
if not (read_logged_bucket):
self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Findings'].append(finding)
write_logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['name'] and bucket['region'] == finding['region'], self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['OBP']))
if not (write_logged_bucket):
self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Findings'].append(finding)
# Looking at all subnets and checking whether each meets one of the OBPs in one of the regions
for finding in self.__network_subnets:
logged_subnet = list(filter(lambda subnet: subnet['id'] == finding['id'], self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['OBP']))
if not (logged_subnet):
self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Findings'].append(finding)
# Setting VCN Flow Logs Findings
if self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Findings']:
self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Status'] = False
else:
self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Status'] = True
# Setting Write Bucket Findings
if self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Findings']:
self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Status'] = False
elif not self.__service_connectors:
# If there are no service connectors then by default all buckets are not logged
self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Status'] = False
self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Findings'] += self.__buckets
else:
self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Status'] = True
# Setting Read Bucket Findings
if self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Findings']:
self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Status'] = False
elif not self.__service_connectors:
# If there are no service connectors then by default all buckets are not logged
self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Status'] = False
self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Findings'] += self.__buckets
else:
self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Status'] = True
#######################################
# OBP Networking Checks
#######################################
# Fast Connect Connections
for drg_id, drg_values in self.__network_drg_attachments.items():
number_of_valid_connected_vcns = 0
number_of_valid_fast_connect_circuits = 0
number_of_valid_site_to_site_connection = 0
fast_connect_providers = set()
customer_premises_equipment = set()
for attachment in drg_values:
if attachment['network_type'].upper() == 'VCN':
# Checking if DRG has a valid VCN attached to it
number_of_valid_connected_vcns += 1
elif attachment['network_type'].upper() == 'IPSEC_TUNNEL':
# Checking if the IPSec Connection has both tunnels up
for ipsec_connection in self.__network_ipsec_connections[drg_id]:
if ipsec_connection['tunnels_up']:
# Good IPSec connection: increment the valid site-to-site count and track the CPE
customer_premises_equipment.add(ipsec_connection['cpe_id'])
number_of_valid_site_to_site_connection += 1
elif attachment['network_type'].upper() == 'VIRTUAL_CIRCUIT':
# Checking for provisioned, BGP-enabled virtual circuits associated with the attachment
for virtual_circuit in self.__network_fastconnects[attachment['drg_id']]:
if attachment['network_id'] == virtual_circuit['id']:
if virtual_circuit['lifecycle_state'].upper() == 'PROVISIONED' and virtual_circuit['bgp_session_state'].upper() == "UP":
# Good VC: increment the circuit count and track the provider name
fast_connect_providers.add(virtual_circuit['provider_name'])
number_of_valid_fast_connect_circuits += 1
try:
record = {
"drg_id": drg_id,
"drg_display_name": self.__network_drgs[drg_id]['display_name'],
"region": self.__network_drgs[drg_id]['region'],
"number_of_connected_vcns": number_of_valid_connected_vcns,
"number_of_customer_premises_equipment": len(customer_premises_equipment),
"number_of_connected_ipsec_connections": number_of_valid_site_to_site_connection,
"number_of_fastconnects_cicruits": number_of_valid_fast_connect_circuits,
"number_of_fastconnect_providers": len(fast_connect_providers),
}
except Exception:
record = {
"drg_id": drg_id,
"drg_display_name": "Deleted with an active attachment",
"region": attachment['region'],
"number_of_connected_vcns": 0,
"number_of_customer_premises_equipment": 0,
"number_of_connected_ipsec_connections": 0,
"number_of_fastconnects_cicruits": 0,
"number_of_fastconnect_providers": 0,
}
print(f"This DRG: {drg_id} is deleted with an active attachment: {attachment['display_name']}")
# Checking if the DRG and connected resources are aligned with best practices:
# one attached VCN, one VPN connection and one FastConnect
if number_of_valid_connected_vcns and number_of_valid_site_to_site_connection and number_of_valid_fast_connect_circuits:
self.__obp_regional_checks[record['region']]["Network_Connectivity"]["drgs"].append(record)
self.__obp_regional_checks[record['region']]["Network_Connectivity"]["status"] = True
# Two VPN site-to-site connections to separate CPEs
elif number_of_valid_connected_vcns and number_of_valid_site_to_site_connection and len(customer_premises_equipment) >= 2:
self.__obp_regional_checks[record['region']]["Network_Connectivity"]["drgs"].append(record)
self.__obp_regional_checks[record['region']]["Network_Connectivity"]["status"] = True
# Two FastConnects from different providers
elif number_of_valid_connected_vcns and number_of_valid_fast_connect_circuits and len(fast_connect_providers) >= 2:
self.__obp_regional_checks[record['region']]["Network_Connectivity"]["drgs"].append(record)
self.__obp_regional_checks[record['region']]["Network_Connectivity"]["status"] = True
else:
self.__obp_regional_checks[record['region']]["Network_Connectivity"]["findings"].append(record)
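# The branch above classifies a DRG's connectivity as redundant or not. A
# standalone sketch of that decision (the counts are hypothetical plain inputs,
# not the SDK objects used above):

```python
def drg_is_redundant(vcns, ipsec, circuits, cpes, providers):
    if not vcns:
        return False            # no VCN attached, nothing to protect
    if ipsec and circuits:
        return True             # one site-to-site VPN plus one FastConnect
    if ipsec and cpes >= 2:
        return True             # two site-to-site VPNs to separate CPEs
    if circuits and providers >= 2:
        return True             # two FastConnects from different providers
    return False

assert drg_is_redundant(1, 1, 1, 1, 1) is True
assert drg_is_redundant(1, 0, 2, 0, 2) is True
assert drg_is_redundant(1, 2, 0, 1, 0) is False  # two VPNs but a single CPE
```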
# Consolidating regional network connectivity checks
for region_key, region_values in self.__obp_regional_checks.items():
# Connectivity is assumed good in all regions; if one region is not well connected, the check fails
if not region_values["Network_Connectivity"]["status"]:
self.obp_foundations_checks['Networking_Connectivity']['Status'] = False
self.obp_foundations_checks["Networking_Connectivity"]["Findings"] += region_values["Network_Connectivity"]["findings"]
self.obp_foundations_checks["Networking_Connectivity"]["OBP"] += region_values["Network_Connectivity"]["drgs"]
#######################################
# Cloud Guard Checks
#######################################
cloud_guard_record = {
"cloud_guard_endable": True if self.__cloud_guard_config_status == 'ENABLED' else False,
"target_at_root": False,
"targert_configuration_detector": False,
"targert_configuration_detector_customer_owned": False,
"target_activity_detector": False,
"target_activity_detector_customer_owned": False,
"target_threat_detector": False,
"target_threat_detector_customer_owned": False,
"target_responder_recipes": False,
"target_responder_recipes_customer_owned": False,
"target_responder_event_rule": False,
}
try:
# Cloud Guard Target attached to the root compartment with activity, config, and threat detector plus a responder
if self.__cloud_guard_targets[self.__tenancy.id]:
cloud_guard_record['target_at_root'] = True
if self.__cloud_guard_targets[self.__tenancy.id]:
if self.__cloud_guard_targets[self.__tenancy.id]['target_detector_recipes']:
for recipe in self.__cloud_guard_targets[self.__tenancy.id]['target_detector_recipes']:
if recipe.detector.upper() == 'IAAS_CONFIGURATION_DETECTOR':
cloud_guard_record['targert_configuration_detector'] = True
if recipe.owner.upper() == "CUSTOMER":
cloud_guard_record['targert_configuration_detector_customer_owned'] = True
elif recipe.detector.upper() == 'IAAS_ACTIVITY_DETECTOR':
cloud_guard_record['target_activity_detector'] = True
if recipe.owner.upper() == "CUSTOMER":
cloud_guard_record['target_activity_detector_customer_owned'] = True
elif recipe.detector.upper() == 'IAAS_THREAT_DETECTOR':
cloud_guard_record['target_threat_detector'] = True
if recipe.owner.upper() == "CUSTOMER":
cloud_guard_record['target_threat_detector_customer_owned'] = True
if self.__cloud_guard_targets[self.__tenancy.id]['target_responder_recipes']:
cloud_guard_record['target_responder_recipes'] = True
for recipe in self.__cloud_guard_targets[self.__tenancy.id]['target_responder_recipes']:
if recipe.owner.upper() == 'CUSTOMER':
cloud_guard_record['target_responder_recipes_customer_owned'] = True
for rule in recipe.effective_responder_rules:
if rule.responder_rule_id.upper() == 'EVENT' and rule.details.is_enabled:
cloud_guard_record['target_responder_event_rule'] = True
cloud_guard_record['target_id'] = self.__cloud_guard_targets[self.__tenancy.id]['id']
cloud_guard_record['target_name'] = self.__cloud_guard_targets[self.__tenancy.id]['display_name']
except Exception:
pass
all_cloud_guard_checks = True
for key, value in cloud_guard_record.items():
if not (value):
all_cloud_guard_checks = False
self.obp_foundations_checks['Cloud_Guard_Config']['Status'] = all_cloud_guard_checks
if all_cloud_guard_checks:
self.obp_foundations_checks['Cloud_Guard_Config']['OBP'].append(cloud_guard_record)
else:
self.obp_foundations_checks['Cloud_Guard_Config']['Findings'].append(cloud_guard_record)
##########################################################################
# Orchestrates data collection and CIS report generation
##########################################################################
def __report_generate_cis_report(self, level):
        # This function generates the CSV findings reports and the summary report
# Creating summary report
summary_report = []
for key, recommendation in self.cis_foundations_benchmark_1_2.items():
if recommendation['Level'] <= level:
report_filename = "cis" + " " + recommendation['section'] + "_" + recommendation['recommendation_#']
report_filename = report_filename.replace(" ", "_").replace(".", "-").replace("_-_", "_") + ".csv"
if recommendation['Status']:
compliant_output = "Yes"
elif recommendation['Status'] is None:
compliant_output = "Not Applicable"
else:
compliant_output = "No"
record = {
"Recommendation #": f"{key}",
"Section": recommendation['section'],
"Level": str(recommendation['Level']),
"Compliant": compliant_output if compliant_output != "Not Applicable" else "N/A",
"Findings": (str(len(recommendation['Findings'])) if len(recommendation['Findings']) > 0 else " "),
"Compliant Items": str(len(recommendation['Total']) - len(recommendation['Findings'])),
"Total": (str(len(recommendation['Total'])) if len(recommendation['Total']) > 0 else " "),
"Title": recommendation['Title'],
"CIS v8": recommendation['CISv8'],
"CCCS Guard Rail": recommendation['CCCS Guard Rail'],
"Filename": report_filename if len(recommendation['Findings']) > 0 else " ",
"Remediation": self.cis_report_data[key]['Remediation']
}
# Add record to summary report for CSV output
summary_report.append(record)
# Generate Findings report
# self.__print_to_csv_file("cis", recommendation['section'] + "_" + recommendation['recommendation_#'], recommendation['Findings'])
# Screen output for CIS Summary Report
print_header("CIS Foundations Benchmark 1.2 Summary Report")
print('Num' + "\t" + "Level " +
"\t" "Compliant" + "\t" + "Findings " + "\t" + "Total " + "\t\t" + 'Title')
print('#' * 90)
for finding in summary_report:
# If print_to_screen is False it will only print non-compliant findings
if not (self.__print_to_screen) and finding['Compliant'] == 'No':
print(finding['Recommendation #'] + "\t" +
finding['Level'] + "\t" + finding['Compliant'] + "\t\t" + finding['Findings'] + "\t\t" +
finding['Total'] + "\t\t" + finding['Title'])
elif self.__print_to_screen:
print(finding['Recommendation #'] + "\t" +
finding['Level'] + "\t" + finding['Compliant'] + "\t\t" + finding['Findings'] + "\t\t" +
finding['Total'] + "\t\t" + finding['Title'])
# Generating Summary report CSV
print_header("Writing CIS reports to CSV")
summary_file_name = self.__print_to_csv_file(
self.__report_directory, "cis", "summary_report", summary_report)
self.__report_generate_html_summary_report(
self.__report_directory, "cis", "html_summary_report", summary_report)
# Outputting to a bucket if I have one
if summary_file_name and self.__output_bucket:
self.__os_copy_report_to_object_storage(
self.__output_bucket, summary_file_name)
for key, recommendation in self.cis_foundations_benchmark_1_2.items():
if recommendation['Level'] <= level:
report_file_name = self.__print_to_csv_file(
self.__report_directory, "cis", recommendation['section'] + "_" + recommendation['recommendation_#'], recommendation['Findings'])
if report_file_name and self.__output_bucket:
self.__os_copy_report_to_object_storage(
self.__output_bucket, report_file_name)
##########################################################################
# Generates an HTML report
##########################################################################
def __report_generate_html_summary_report(self, report_directory, header, file_subject, data):
try:
# Creating report directory
if not os.path.isdir(report_directory):
os.mkdir(report_directory)
except Exception as e:
raise Exception(
"Error in creating report directory: " + str(e.args))
try:
# if no data
if len(data) == 0:
return None
# get the file name of the CSV
file_name = header + "_" + file_subject
file_name = (file_name.replace(" ", "_")).replace(".", "-").replace("_-_", "_") + ".html"
file_path = os.path.join(report_directory, file_name)
            # add report_datetime to each dictionary
result = [dict(item, extract_date=self.start_time_str)
for item in data]
# If this flag is set all OCIDs are Hashed to redact them
if self.__redact_output:
redacted_result = []
for item in result:
record = {}
for key in item.keys():
str_item = str(item[key])
items_to_redact = re.findall(self.__oci_ocid_pattern, str_item)
for redact_me in items_to_redact:
str_item = str_item.replace(redact_me, hashlib.sha256(str.encode(redact_me)).hexdigest())
record[key] = str_item
redacted_result.append(record)
# Overriding result with redacted result
result = redacted_result
            # generate fields
            fields = ['Recommendation #', 'Compliant', 'Section', 'Details']
            html_title = 'CIS OCI Foundations Benchmark 1.2 - Compliance Report'
            with open(file_path, mode='w') as html_file:
                # Creating the HTML document and page heading
                html_file.write('<html><head><title>' + html_title + '</title></head><body>')
                html_file.write('<h1>' + html_title.replace('-', '&ndash;') + '</h1>')
                html_file.write('<h2>Tenancy Name: ' + self.__tenancy.name + '</h2>')
                # Get the extract date
                extract_date = result[0]['extract_date'].replace('T', ' ')
                html_file.write('<p>Extract Date: ' + extract_date + ' UTC</p>')
                # Creating HTML table of the summary report:
                # compliant rows (green) first, then non-compliant rows (red)
                html_file.write('<table border="1"><tr>')
                for field in fields:
                    html_file.write('<th>' + field + '</th>')
                html_file.write('</tr>')
                html_appendix = []
                for row in result:
                    if row['Compliant'] == 'No':
                        continue
                    html_file.write('<tr style="color:green;">')
                    html_file.write('<td>' + row['Recommendation #'] + '</td>')
                    html_file.write('<td>' + row['Compliant'] + '</td>')
                    html_file.write('<td>' + row['Section'] + '</td>')
                    html_file.write('<td>' + row['Title'] + '</td></tr>')
                for row in result:
                    if row['Compliant'] == 'Yes':
                        continue
                    html_appendix.append(row['Recommendation #'])
                    v = row['Recommendation #']
                    html_file.write('<tr style="color:red;">')
                    # Non-compliant rows link to their appendix entry
                    html_file.write('<td><a href="#' + v + '">' + v + '</a></td>')
                    html_file.write('<td>' + row['Compliant'] + '</td>')
                    html_file.write('<td>' + row['Section'] + '</td>')
                    html_file.write('<td>' + row['Title'] + '</td></tr>')
                html_file.write('</table>')
                # Creating appendix for the report
                for finding in html_appendix:
                    fing = self.cis_foundations_benchmark_1_2[finding]
                    html_file.write(f'<h3 id="{finding}">{finding} &ndash; {fing["Title"]}</h3>\n')
                    for item_key, item_value in self.cis_report_data[finding].items():
                        if item_value != "":
                            html_file.write(f'<h4>{item_key.title()}</h4>')
                            if item_key == 'Observation':
                                html_file.write(f"<p>{str(len(fing['Findings']))} of {str(len(fing['Total']))} {item_value}</p>\n")
                            else:
                                v = item_value.replace('\n', '<br>')
                                html_file.write(f'<p>{v}</p>\n')
                    html_file.write('<br>\n')
                # Closing HTML
                html_file.write('</body></html>\n')
            print("HTML: " + file_subject.ljust(22) + " --> " + file_path)
            # Used by Upload
            return file_path
except Exception as e:
raise Exception("Error in report_generate_html_report: " + str(e.args))
##########################################################################
# Orchestrates analysis and report generation
##########################################################################
def __report_generate_obp_report(self):
obp_summary_report = []
# Screen output for CIS Summary Report
print_header("OCI Best Practices Findings")
print('Category' + "\t\t\t\t" + "Compliant" + "\t" + "Findings " + "\tBest Practices")
print('#' * 90)
# Adding data to summary report
for key, recommendation in self.obp_foundations_checks.items():
padding = str(key).ljust(25, " ")
print(padding + "\t\t" + str(recommendation['Status']) + "\t" + "\t" + str(len(recommendation['Findings'])) + "\t" + "\t" + str(len(recommendation['OBP'])))
record = {
"Recommendation": str(key),
"Compliant": ('Yes' if recommendation['Status'] else 'No'),
"OBP": (str(len(recommendation['OBP'])) if len(recommendation['OBP']) > 0 else " "),
"Findings": (str(len(recommendation['Findings'])) if len(recommendation['Findings']) > 0 else " "),
"Documentation": recommendation['Documentation']
}
obp_summary_report.append(record)
print_header("Writing Oracle Best Practices reports to CSV")
summary_report_file_name = self.__print_to_csv_file(
self.__report_directory, "obp", "OBP_Summary", obp_summary_report)
if summary_report_file_name and self.__output_bucket:
self.__os_copy_report_to_object_storage(
self.__output_bucket, summary_report_file_name)
        # Printing Findings to CSV
        for key, value in self.obp_foundations_checks.items():
            report_file_name = self.__print_to_csv_file(
                self.__report_directory, "obp", key + "_Findings", value['Findings'])
            if report_file_name and self.__output_bucket:
                self.__os_copy_report_to_object_storage(
                    self.__output_bucket, report_file_name)
        # Printing OBPs to CSV
        for key, value in self.obp_foundations_checks.items():
            report_file_name = self.__print_to_csv_file(
                self.__report_directory, "obp", key + "_Best_Practices", value['OBP'])
            if report_file_name and self.__output_bucket:
                self.__os_copy_report_to_object_storage(
                    self.__output_bucket, report_file_name)
##########################################################################
# Coordinates calls of all the read function required for analyzing tenancy
##########################################################################
def __collect_tenancy_data(self):
# Runs identity functions only in home region
thread_compartments = Thread(target=self.__identity_read_compartments)
thread_compartments.start()
thread_identity_groups = Thread(target=self.__identity_read_groups_and_membership)
thread_identity_groups.start()
thread_cloud_guard_config = Thread(target=self.__cloud_guard_read_cloud_guard_configuration)
thread_cloud_guard_config.start()
thread_compartments.join()
thread_cloud_guard_config.join()
thread_identity_groups.join()
print("\nProcessing Home Region resources...")
cis_home_region_functions = [
self.__identity_read_users,
self.__identity_read_tenancy_password_policy,
self.__identity_read_dynamic_groups,
self.__identity_read_domains,
self.__audit_read_tenancy_audit_configuration,
self.__identity_read_availability_domains,
self.__identity_read_tag_defaults,
self.__identity_read_tenancy_policies,
]
        # Budgets is a global construct
if self.__obp_checks:
obp_home_region_functions = [
self.__budget_read_budgets,
self.__cloud_guard_read_cloud_guard_targets
]
else:
obp_home_region_functions = []
# Threads for Home region checks
home_threads = []
for home_func in cis_home_region_functions + obp_home_region_functions:
t = Thread(target=home_func)
t.start()
home_threads.append(t)
# Waiting for home threads to complete
for t in home_threads:
t.join()
# The above checks are run in the home region
if self.__home_region not in self.__regions_to_run_in and not (self.__run_in_all_regions):
self.__regions.pop(self.__home_region)
print("\nProcessing regional resources...")
# Stores running threads
# List of functions for CIS
cis_regional_functions = [
self.__search_resources_in_root_compartment,
self.__vault_read_vaults,
self.__os_read_buckets,
self.__logging_read_log_groups_and_logs,
self.__events_read_event_rules,
self.__ons_read_subscriptions,
self.__network_read_network_security_lists,
self.__network_read_network_security_groups_rules,
self.__network_read_network_subnets,
self.__adb_read_adbs,
self.__oic_read_oics,
self.__oac_read_oacs,
self.__block_volume_read_block_volumes,
self.__boot_volume_read_boot_volumes,
self.__fss_read_fsss,
]
# Oracle Best practice functions
if self.__obp_checks:
obp_functions = [
self.__network_read_fastonnects,
self.__network_read_ip_sec_connections,
self.__network_read_drgs,
self.__network_read_drg_attachments,
self.__sch_read_service_connectors,
]
else:
obp_functions = []
def execute_function(func):
func()
with concurrent.futures.ThreadPoolExecutor(max_workers=6) as executor:
# Submit each function to the executor
futures = []
for func in cis_regional_functions + obp_functions:
futures.append(executor.submit(execute_function, func))
# Wait for all functions to complete
for future in concurrent.futures.as_completed(futures):
future.result()
##########################################################################
# Generate Raw Data Output
##########################################################################
def __report_generate_raw_data_output(self):
# List to store output reports if copying to object storage is required
list_report_file_names = []
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "identity_groups_and_membership", self.__groups_to_users)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "identity_domains", self.__identity_domains)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "identity_users", self.__users)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "identity_policies", self.__policies)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "identity_dynamic_groups", self.__dynamic_groups)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "identity_tags", self.__tag_defaults)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "identity_compartments", self.__raw_compartment)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "network_security_groups", self.__network_security_groups)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "network_security_lists", self.__network_security_lists)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "network_subnets", self.__network_subnets)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "autonomous_databases", self.__autonomous_databases)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "analytics_instances", self.__analytics_instances)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "integration_instances", self.__integration_instances)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "event_rules", self.__event_rules)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "log_groups_and_logs", self.__logging_list)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "object_storage_buckets", self.__buckets)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "boot_volumes", self.__boot_volumes)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "block_volumes", self.__block_volumes)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "file_storage_system", self.__file_storage_system)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "vaults_and_keys", self.__vaults)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "ons_subscriptions", self.__subscriptions)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "budgets", self.__budgets)
list_report_file_names.append(report_file_name)
# Converting a one to one dict to a list
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "service_connectors", list(self.__service_connectors.values()))
list_report_file_names.append(report_file_name)
# Converting a dict that is one to a list to a flat list
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "network_fastconnects", (list(itertools.chain.from_iterable(self.__network_fastconnects.values()))))
list_report_file_names.append(report_file_name)
# Converting a dict that is one to a list to a flat list
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "network_ipsec_connections", list(itertools.chain.from_iterable(self.__network_ipsec_connections.values())))
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "network_drgs", self.__raw_network_drgs)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "cloud_guard_target", list(self.__cloud_guard_targets.values()))
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "regions", self.__raw_regions)
list_report_file_names.append(report_file_name)
report_file_name = self.__print_to_csv_file(
self.__report_directory, "raw_data", "network_drg_attachments", list(itertools.chain.from_iterable(self.__network_drg_attachments.values())))
list_report_file_names.append(report_file_name)
if self.__output_bucket:
for raw_report in list_report_file_names:
if raw_report:
self.__os_copy_report_to_object_storage(
self.__output_bucket, raw_report)
##########################################################################
# Copy Report to Object Storage
##########################################################################
def __os_copy_report_to_object_storage(self, bucketname, filename):
object_name = filename
# print(self.__os_namespace)
try:
with open(filename, "rb") as f:
try:
self.__output_bucket_client.put_object(
self.__os_namespace, bucketname, object_name, f)
except Exception:
print("Failed to write " + object_name + " to bucket " + bucketname + ". Please check your bucket and IAM permissions.")
except Exception as e:
raise Exception(
"Error opening file os_copy_report_to_object_storage: " + str(e.args))
##########################################################################
# Print to CSV
##########################################################################
def __print_to_csv_file(self, report_directory, header, file_subject, data):
debug("__print_to_csv_file: " + header + "_" + file_subject)
try:
# Creating report directory
if not os.path.isdir(report_directory):
os.mkdir(report_directory)
except Exception as e:
raise Exception(
"Error in creating report directory: " + str(e.args))
try:
# if no data
if len(data) == 0:
return None
# get the file name of the CSV
file_name = header + "_" + file_subject
file_name = (file_name.replace(" ", "_")).replace(".", "-").replace("_-_", "_") + ".csv"
file_path = os.path.join(report_directory, file_name)
            # add report_datetime to each dictionary
result = [dict(item, extract_date=self.start_time_str)
for item in data]
# If this flag is set all OCIDs are Hashed to redact them
if self.__redact_output:
redacted_result = []
for item in result:
record = {}
for key in item.keys():
str_item = str(item[key])
items_to_redact = re.findall(self.__oci_ocid_pattern, str_item)
for redact_me in items_to_redact:
str_item = str_item.replace(redact_me, hashlib.sha256(str.encode(redact_me)).hexdigest())
record[key] = str_item
redacted_result.append(record)
# Overriding result with redacted result
result = redacted_result
# generate fields
fields = [key for key in result[0].keys()]
with open(file_path, mode='w', newline='') as csv_file:
writer = csv.DictWriter(csv_file, fieldnames=fields)
# write header
writer.writeheader()
for row in result:
writer.writerow(row)
# print(row)
print("CSV: " + file_subject.ljust(22) + " --> " + file_path)
# Used by Upload
return file_path
except Exception as e:
raise Exception("Error in print_to_csv_file: " + str(e.args))
##########################################################################
# Orchestrates Data collection and reports
##########################################################################
def generate_reports(self, level=2):
# Collecting all the tenancy data
self.__collect_tenancy_data()
# Analyzing Data for CIS reports
self.__report_cis_analyze_tenancy_data()
# Generate CIS reports
self.__report_generate_cis_report(level)
if self.__obp_checks:
# Analyzing Data for OBP reports
self.__obp_analyze_tenancy_data()
self.__report_generate_obp_report()
if self.__output_raw_data:
self.__report_generate_raw_data_output()
if self.__errors:
error_report = self.__print_to_csv_file(
self.__report_directory, "error", "report", self.__errors)
if self.__output_bucket:
if error_report:
self.__os_copy_report_to_object_storage(
self.__output_bucket, error_report)
end_datetime = datetime.datetime.now().replace(tzinfo=pytz.UTC)
end_time_str = str(end_datetime.strftime("%Y-%m-%dT%H:%M:%S"))
print_header("Finished at " + end_time_str + ", duration: " + str(end_datetime - self.start_datetime))
return self.__report_directory
def get_obp_checks(self):
self.__obp_checks = True
self.generate_reports()
return self.obp_foundations_checks
##########################################################################
# Create CSV Hyperlink
##########################################################################
def __generate_csv_hyperlink(self, url, name):
if len(url) < 255:
return '=HYPERLINK("' + url + '","' + name + '")'
else:
return url
##########################################################################
# check service error to warn instead of error
##########################################################################
def check_service_error(code):
return ('max retries exceeded' in str(code).lower() or
'auth' in str(code).lower() or
'notfound' in str(code).lower() or
code == 'Forbidden' or
code == 'TooManyRequests' or
code == 'IncorrectState' or
code == 'LimitExceeded')
##########################################################################
# Create signer for Authentication
# Input - config_profile and is_instance_principals and is_delegation_token
# Output - config and signer objects
##########################################################################
def create_signer(file_location, config_profile, is_instance_principals, is_delegation_token, is_security_token):
# if instance principals authentications
if is_instance_principals:
try:
signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
config = {'region': signer.region, 'tenancy': signer.tenancy_id}
return config, signer
except Exception:
print("Error obtaining instance principals certificate, aborting")
raise SystemExit
# -----------------------------
# Delegation Token
# -----------------------------
elif is_delegation_token:
try:
# check if env variables OCI_CONFIG_FILE, OCI_CONFIG_PROFILE exist and use them
env_config_file = os.environ.get('OCI_CONFIG_FILE')
env_config_section = os.environ.get('OCI_CONFIG_PROFILE')
# check if file exist
if env_config_file is None or env_config_section is None:
print(
"*** OCI_CONFIG_FILE and OCI_CONFIG_PROFILE env variables not found, abort. ***")
print("")
raise SystemExit
config = oci.config.from_file(env_config_file, env_config_section)
delegation_token_location = config["delegation_token_file"]
with open(delegation_token_location, 'r') as delegation_token_file:
delegation_token = delegation_token_file.read().strip()
# get signer from delegation token
signer = oci.auth.signers.InstancePrincipalsDelegationTokenSigner(
delegation_token=delegation_token)
return config, signer
except KeyError:
print("* Key Error obtaining delegation_token_file")
raise SystemExit
except Exception:
raise
# ---------------------------------------------------------------------------
# Security Token - Credit to Dave Knot (https://github.com/dns-prefetch)
# ---------------------------------------------------------------------------
elif is_security_token:
try:
# Read the token file from the security_token_file parameter of the .config file
config = oci.config.from_file(
oci.config.DEFAULT_LOCATION,
(config_profile if config_profile else oci.config.DEFAULT_PROFILE)
)
token_file = config['security_token_file']
token = None
with open(token_file, 'r') as f:
token = f.read()
# Read the private key specified by the .config file.
private_key = oci.signer.load_private_key_from_file(config['key_file'])
signer = oci.auth.signers.SecurityTokenSigner(token, private_key)
return config, signer
except KeyError:
print("* Key Error obtaining security_token_file")
raise SystemExit
except Exception:
raise
# -----------------------------
# config file authentication
# -----------------------------
else:
try:
config = oci.config.from_file(
file_location if file_location else oci.config.DEFAULT_LOCATION,
(config_profile if config_profile else oci.config.DEFAULT_PROFILE)
)
signer = oci.signer.Signer(
tenancy=config["tenancy"],
user=config["user"],
fingerprint=config["fingerprint"],
private_key_file_location=config.get("key_file"),
pass_phrase=oci.config.get_config_value_or_default(
config, "pass_phrase"),
private_key_content=config.get("key_content")
)
return config, signer
except Exception:
            print(
                f'** OCI config was not found at {oci.config.DEFAULT_LOCATION} or env variables are missing, aborting **')
raise SystemExit
##########################################################################
# Arg Parsing function to be updated
##########################################################################
def set_parser_arguments():
parser = argparse.ArgumentParser()
parser.add_argument(
'-i',
type=argparse.FileType('r'),
dest='input',
help="Input JSON File"
)
parser.add_argument(
'-o',
type=argparse.FileType('w'),
dest='output_csv',
help="CSV Output prefix")
result = parser.parse_args()
if len(sys.argv) < 3:
parser.print_help()
return None
return result
##########################################################################
# execute_report
##########################################################################
def execute_report():
# Get Command Line Parser
parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100, width=180))
parser.add_argument('-c', default="", dest='file_location',
help='OCI config file location')
parser.add_argument('-t', default="", dest='config_profile',
help='Config file section to use (tenancy profile) ')
parser.add_argument('-p', default="", dest='proxy',
help='Set Proxy (i.e. www-proxy-server.com:80) ')
parser.add_argument('--output-to-bucket', default="", dest='output_bucket',
help='Set Output bucket name (i.e. my-reporting-bucket) ')
parser.add_argument('--report-directory', default=None, dest='report_directory',
help='Set Output report directory by default it is the current date (i.e. reports-date) ')
parser.add_argument('--print-to-screen', default='True', dest='print_to_screen',
help='Set to False if you want to see only non-compliant findings (i.e. False) ')
parser.add_argument('--level', default=2, dest='level',
help='CIS Recommendation Level options are: 1 or 2. Set to 2 by default ')
parser.add_argument('--regions', default="", dest='regions',
help='Regions to run the compliance checks on, by default it will run in all regions. Sample input: us-ashburn-1,ca-toronto-1,eu-frankfurt-1')
parser.add_argument('--raw', action='store_true', default=False,
help='Outputs all resource data into CSV files')
parser.add_argument('--obp', action='store_true', default=False,
help='Checks for OCI best practices')
parser.add_argument('--redact_output', action='store_true', default=False,
help='Redacts OCIDs in output CSV files')
parser.add_argument('-ip', action='store_true', default=False,
dest='is_instance_principals', help='Use Instance Principals for Authentication ')
parser.add_argument('-dt', action='store_true', default=False,
dest='is_delegation_token', help='Use Delegation Token for Authentication in Cloud Shell')
parser.add_argument('-st', action='store_true', default=False,
dest='is_security_token', help='Authenticate using Security Token')
parser.add_argument('-v', action='store_true', default=False,
dest='version', help='Show the version of the script and exit.')
parser.add_argument('--debug', action='store_true', default=False,
dest='debug', help='Enables debugging messages. This feature is in beta')
cmd = parser.parse_args()
if cmd.version:
show_version()
sys.exit()
config, signer = create_signer(cmd.file_location, cmd.config_profile, cmd.is_instance_principals, cmd.is_delegation_token, cmd.is_security_token)
config['retry_strategy'] = oci.retry.DEFAULT_RETRY_STRATEGY
report = CIS_Report(config, signer, cmd.proxy, cmd.output_bucket, cmd.report_directory, cmd.print_to_screen, \
cmd.regions, cmd.raw, cmd.obp, cmd.redact_output, debug=cmd.debug)
csv_report_directory = report.generate_reports(int(cmd.level))
try:
if OUTPUT_TO_XLSX:
workbook = Workbook(csv_report_directory + '/Consolidated_Report.xlsx', {'in_memory': True})
for csvfile in glob.glob(csv_report_directory + '/*.csv'):
worksheet_name = csvfile.split(os.path.sep)[-1].replace(".csv", "").replace("raw_data_", "raw_").replace("Findings", "fds").replace("Best_Practices", "bps")
if "Identity_and_Access_Management" in worksheet_name:
worksheet_name = worksheet_name.replace("Identity_and_Access_Management", "IAM")
elif "Storage_Object_Storage" in worksheet_name:
worksheet_name = worksheet_name.replace("Storage_Object_Storage", "Object_Storage")
elif "raw_identity_groups_and_membership" in worksheet_name:
worksheet_name = worksheet_name.replace("raw_identity", "raw_iam")
elif "Cost_Tracking_Budgets_Best_Practices" in worksheet_name:
worksheet_name = worksheet_name.replace("Cost_Tracking_", "")
elif "Storage_File_Storage_Service" in worksheet_name:
worksheet_name = worksheet_name.replace("Storage_File_Storage_Service", "FSS")
elif "raw_cloud_guard_target" in worksheet_name:
# cloud guard targets are too large for a cell
continue
elif len(worksheet_name) > 31:
worksheet_name = worksheet_name.replace("_", "")
worksheet = workbook.add_worksheet(worksheet_name)
with open(csvfile, 'rt', encoding='unicode_escape') as f:
reader = csv.reader(f)
for r, row in enumerate(reader):
for c, col in enumerate(row):
                        # Skipping the deep link due to formatting errors in xlsx
if "=HYPERLINK" not in col:
worksheet.write(r, c, col)
workbook.close()
except Exception as e:
print("**Failed to output to excel. Please use CSV files.**")
print(e)
##########################################################################
# Main
##########################################################################
if __name__ == "__main__":
execute_report()
\ No newline at end of file
+##########################################################################
+# Copyright (c) 2016, 2023, Oracle and/or its affiliates. All rights reserved.
+# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
+#
+# cis_reports.py
+# @author base: Adi Zohar
+# @author: Josh Hammer, Andre Correa, Chad Russell, Jake Bloom and Olaf Heimburger
+#
+# Supports Python 3 and above
+#
+# coding: utf-8
+##########################################################################
+
+from __future__ import print_function
+import concurrent.futures
+import sys
+import argparse
+import datetime
+import pytz
+import oci
+import json
+import os
+import csv
+import itertools
+from threading import Thread
+import hashlib
+import re
+import requests
+
+try:
+ from xlsxwriter.workbook import Workbook
+ import glob
+ OUTPUT_TO_XLSX = True
+except Exception:
+ OUTPUT_TO_XLSX = False
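+# Note: xlsxwriter is an optional dependency. When it (or glob) cannot be
+# imported, OUTPUT_TO_XLSX stays False and the report falls back to CSV-only
+# output instead of failing.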
+
+RELEASE_VERSION = "2.6.5"
+PYTHON_SDK_VERSION = "2.110.0"
+UPDATED_DATE = "October 6, 2023"
+
+
+##########################################################################
+# debug print
+##########################################################################
+DEBUG = False  # module-level default; expected to be overridden at runtime (e.g. from a --debug CLI flag)
+def debug(msg):
+ if DEBUG:
+ print(msg)
+
+##########################################################################
+# Print header centered
+##########################################################################
+def print_header(name):
+ chars = 90
+ print('')
+ print('#' * chars)
+ print('#' + name.center(chars - 2, ' ') + '#')
+ print('#' * chars)
+
+
+##########################################################################
+# show_version
+##########################################################################
+def show_version(verbose=False):
+ script_version = f'CIS Reports - Release {RELEASE_VERSION}'
+ script_updated = f'Version {RELEASE_VERSION} Updated on {UPDATED_DATE}'
+ if verbose:
+ print_header('Running ' + script_version)
+ print(script_updated)
+ print('Please use --help for more info')
+ print('\nTested oci-python-sdk version: ' + PYTHON_SDK_VERSION)
+ print('Installed oci-python-sdk version: ' + str(oci.__version__))
+ else:
+ print(script_updated)
+
+
+##########################################################################
+# CIS Reporting Class
+##########################################################################
+class CIS_Report:
+
+ # Class variables
+ _DAYS_OLD = 90
+ __KMS_DAYS_OLD = 365
+ __home_region = []
+
+ # Time Format
+ __iso_time_format = "%Y-%m-%dT%H:%M:%S"
+
+ # OCI Link
+ __oci_cloud_url = "https://cloud.oracle.com"
+ __oci_users_uri = __oci_cloud_url + "/identity/users/"
+ __oci_policies_uri = __oci_cloud_url + "/identity/policies/"
+ __oci_groups_uri = __oci_cloud_url + "/identity/groups/"
+ __oci_dynamic_groups_uri = __oci_cloud_url + "/identity/dynamicgroups/"
+ __oci_buckets_uri = __oci_cloud_url + "/object-storage/buckets/"
+ __oci_boot_volumes_uri = __oci_cloud_url + "/block-storage/boot-volumes/"
+ __oci_block_volumes_uri = __oci_cloud_url + "/block-storage/volumes/"
+ __oci_fss_uri = __oci_cloud_url + "/fss/file-systems/"
+ __oci_networking_uri = __oci_cloud_url + "/networking/vcns/"
+ __oci_adb_uri = __oci_cloud_url + "/db/adb/"
+ __oci_oicinstance_uri = __oci_cloud_url + "/oic/integration-instances/"
+ __oci_oacinstance_uri = __oci_cloud_url + "/analytics/instances/"
+ __oci_compartment_uri = __oci_cloud_url + "/identity/compartments/"
+ __oci_drg_uri = __oci_cloud_url + "/networking/drgs/"
+ __oci_cpe_uri = __oci_cloud_url + "/networking/cpes/"
+ __oci_ipsec_uri = __oci_cloud_url + "/networking/vpn-connections/"
+ __oci_events_uri = __oci_cloud_url + "/events/rules/"
+ __oci_loggroup_uri = __oci_cloud_url + "/logging/log-groups/"
+ __oci_vault_uri = __oci_cloud_url + "/security/kms/vaults/"
+ __oci_budget_uri = __oci_cloud_url + "/usage/budgets/"
+ __oci_cgtarget_uri = __oci_cloud_url + "/cloud-guard/targets/"
+ __oci_onssub_uri = __oci_cloud_url + "/notification/subscriptions/"
+ __oci_serviceconnector_uri = __oci_cloud_url + "/connector-hub/service-connectors/"
+ __oci_fastconnect_uri = __oci_cloud_url + "/networking/fast-connect/virtual-circuit/"
+
+ __oci_ocid_pattern = r'ocid1\.[a-z,0-9]*\.[a-z,0-9]*\.[a-z,0-9,-]*\.[a-z,0-9,\.]{20,}'
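+ # Note: commas inside the character classes above are matched literally
+ # (e.g. [a-z,0-9] is the same set as [a-z0-9,]), so this pattern is slightly
+ # looser than the canonical OCID syntax. It still matches standard OCIDs,
+ # e.g. "ocid1.tenancy.oc1..aaaaaaaaexampleexampleexample" (hypothetical value
+ # for illustration only).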
+
+ # Start print time info
+ start_datetime = datetime.datetime.now(pytz.UTC)  # current time in UTC; now().replace(tzinfo=...) would mislabel local time as UTC
+ start_time_str = str(start_datetime.strftime(__iso_time_format))
+ report_datetime = str(start_datetime.strftime("%Y-%m-%d_%H-%M-%S"))
+
+ # For User based key checks
+ api_key_time_max_datetime = start_datetime - \
+ datetime.timedelta(days=_DAYS_OLD)
+
+ str_api_key_time_max_datetime = api_key_time_max_datetime.strftime(__iso_time_format)
+ api_key_time_max_datetime = datetime.datetime.strptime(str_api_key_time_max_datetime, __iso_time_format)
+
+ # For KMS check
+ kms_key_time_max_datetime = start_datetime - \
+ datetime.timedelta(days=__KMS_DAYS_OLD)
+ str_kms_key_time_max_datetime = kms_key_time_max_datetime.strftime(__iso_time_format)
+ kms_key_time_max_datetime = datetime.datetime.strptime(str_kms_key_time_max_datetime, __iso_time_format)
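+ # The strftime/strptime round-trips above normalize the thresholds: they drop
+ # tzinfo and sub-second precision, so these values compare cleanly against
+ # naive datetimes parsed from API timestamps with the same __iso_time_format.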
+
+ def __init__(self, config, signer, proxy, output_bucket, report_directory, print_to_screen, regions_to_run_in, raw_data, obp, redact_output, debug=False):
+
+ # CIS Foundation benchmark 1.2
+ self.cis_foundations_benchmark_1_2 = {
+ '1.1': {'section': 'Identity and Access Management', 'recommendation_#': '1.1', 'Title': 'Ensure service level admins are created to manage resources of particular service', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.4', '6.7'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
+ '1.2': {'section': 'Identity and Access Management', 'recommendation_#': '1.2', 'Title': 'Ensure permissions on all resources are given only to the tenancy administrator group', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.3'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []},
+ '1.3': {'section': 'Identity and Access Management', 'recommendation_#': '1.3', 'Title': 'Ensure IAM administrators cannot update tenancy Administrators group', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.3', '5.4'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
+ '1.4': {'section': 'Identity and Access Management', 'recommendation_#': '1.4', 'Title': 'Ensure IAM password policy requires minimum length of 14 or greater', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
+ '1.5': {'section': 'Identity and Access Management', 'recommendation_#': '1.5', 'Title': 'Ensure IAM password policy expires passwords within 365 days', 'Status': None, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
+ '1.6': {'section': 'Identity and Access Management', 'recommendation_#': '1.6', 'Title': 'Ensure IAM password policy prevents password reuse', 'Status': None, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.2'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
+ '1.7': {'section': 'Identity and Access Management', 'recommendation_#': '1.7', 'Title': 'Ensure MFA is enabled for all users with a console password', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['6.3', '6.5'], 'CCCS Guard Rail': '1,2,3,4', 'Remediation': []},
+ '1.8': {'section': 'Identity and Access Management', 'recommendation_#': '1.8', 'Title': 'Ensure user API keys rotate within 90 days or less', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '4.4'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
+ '1.9': {'section': 'Identity and Access Management', 'recommendation_#': '1.9', 'Title': 'Ensure user customer secret keys rotate within 90 days or less', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
+ '1.10': {'section': 'Identity and Access Management', 'recommendation_#': '1.10', 'Title': 'Ensure user auth tokens rotate within 90 days or less', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.1', '5.2'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
+ '1.11': {'section': 'Identity and Access Management', 'recommendation_#': '1.11', 'Title': 'Ensure API keys are not created for tenancy administrator users', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.4'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
+ '1.12': {'section': 'Identity and Access Management', 'recommendation_#': '1.12', 'Title': 'Ensure all OCI IAM user accounts have a valid and current email address', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['5.1'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []},
+ '1.13': {'section': 'Identity and Access Management', 'recommendation_#': '1.13', 'Title': 'Ensure Dynamic Groups are used for OCI instances, OCI Cloud Databases and OCI Function to access OCI resources', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['6.8'], 'CCCS Guard Rail': '6,7', 'Remediation': []},
+ '1.14': {'section': 'Identity and Access Management', 'recommendation_#': '1.14', 'Title': 'Ensure storage service-level admins cannot delete resources they manage', 'Status': None, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['5.4', '6.8'], 'CCCS Guard Rail': '2,3', 'Remediation': []},
+
+ '2.1': {'section': 'Networking', 'recommendation_#': '2.1', 'Title': 'Ensure no security lists allow ingress from 0.0.0.0/0 to port 22.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+ '2.2': {'section': 'Networking', 'recommendation_#': '2.2', 'Title': 'Ensure no security lists allow ingress from 0.0.0.0/0 to port 3389.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+ '2.3': {'section': 'Networking', 'recommendation_#': '2.3', 'Title': 'Ensure no network security groups allow ingress from 0.0.0.0/0 to port 22.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+ '2.4': {'section': 'Networking', 'recommendation_#': '2.4', 'Title': 'Ensure no network security groups allow ingress from 0.0.0.0/0 to port 3389.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+ '2.5': {'section': 'Networking', 'recommendation_#': '2.5', 'Title': 'Ensure the default security list of every VCN restricts all traffic except ICMP.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+ '2.6': {'section': 'Networking', 'recommendation_#': '2.6', 'Title': 'Ensure Oracle Integration Cloud (OIC) access is restricted to allowed sources.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+ '2.7': {'section': 'Networking', 'recommendation_#': '2.7', 'Title': 'Ensure Oracle Analytics Cloud (OAC) access is restricted to allowed sources or deployed within a Virtual Cloud Network.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+ '2.8': {'section': 'Networking', 'recommendation_#': '2.8', 'Title': 'Ensure Oracle Autonomous Shared Database (ADB) access is restricted or deployed within a VCN.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.4', '12.3'], 'CCCS Guard Rail': '2,3,5,7,9', 'Remediation': []},
+
+ '3.1': {'section': 'Logging and Monitoring', 'recommendation_#': '3.1', 'Title': 'Ensure audit log retention period is set to 365 days.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['8.10'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.2': {'section': 'Logging and Monitoring', 'recommendation_#': '3.2', 'Title': 'Ensure default tags are used on resources.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['1.1'], 'CCCS Guard Rail': '', 'Remediation': []},
+ '3.3': {'section': 'Logging and Monitoring', 'recommendation_#': '3.3', 'Title': 'Create at least one notification topic and subscription to receive monitoring alerts.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['8.2', '8.11'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.4': {'section': 'Logging and Monitoring', 'recommendation_#': '3.4', 'Title': 'Ensure a notification is configured for Identity Provider changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.5': {'section': 'Logging and Monitoring', 'recommendation_#': '3.5', 'Title': 'Ensure a notification is configured for IdP group mapping changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.6': {'section': 'Logging and Monitoring', 'recommendation_#': '3.6', 'Title': 'Ensure a notification is configured for IAM group changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.7': {'section': 'Logging and Monitoring', 'recommendation_#': '3.7', 'Title': 'Ensure a notification is configured for IAM policy changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.8': {'section': 'Logging and Monitoring', 'recommendation_#': '3.8', 'Title': 'Ensure a notification is configured for user changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.9': {'section': 'Logging and Monitoring', 'recommendation_#': '3.9', 'Title': 'Ensure a notification is configured for VCN changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.10': {'section': 'Logging and Monitoring', 'recommendation_#': '3.10', 'Title': 'Ensure a notification is configured for changes to route tables.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.11': {'section': 'Logging and Monitoring', 'recommendation_#': '3.11', 'Title': 'Ensure a notification is configured for security list changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.12': {'section': 'Logging and Monitoring', 'recommendation_#': '3.12', 'Title': 'Ensure a notification is configured for network security group changes.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.13': {'section': 'Logging and Monitoring', 'recommendation_#': '3.13', 'Title': 'Ensure a notification is configured for changes to network gateways.', 'Status': False, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['4.2'], 'CCCS Guard Rail': '11', 'Remediation': []},
+ '3.14': {'section': 'Logging and Monitoring', 'recommendation_#': '3.14', 'Title': 'Ensure VCN flow logging is enabled for all subnets.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['8.2', '8.5', '13.6'], 'CCCS Guard Rail': '', 'Remediation': []},
+ '3.15': {'section': 'Logging and Monitoring', 'recommendation_#': '3.15', 'Title': 'Ensure Cloud Guard is enabled in the root compartment of the tenancy.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['8.2', '8.5', '8.11'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []},
+ '3.16': {'section': 'Logging and Monitoring', 'recommendation_#': '3.16', 'Title': 'Ensure customer created Customer Managed Key (CMK) is rotated at least annually.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': [], 'CCCS Guard Rail': '6,7', 'Remediation': []},
+ '3.17': {'section': 'Logging and Monitoring', 'recommendation_#': '3.17', 'Title': 'Ensure write level Object Storage logging is enabled for all buckets.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['8.2'], 'CCCS Guard Rail': '', 'Remediation': []},
+
+ '4.1.1': {'section': 'Storage - Object Storage', 'recommendation_#': '4.1.1', 'Title': 'Ensure no Object Storage buckets are publicly visible.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.3'], 'CCCS Guard Rail': '', 'Remediation': []},
+ '4.1.2': {'section': 'Storage - Object Storage', 'recommendation_#': '4.1.2', 'Title': 'Ensure Object Storage Buckets are encrypted with a Customer-Managed Key (CMK).', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
+ '4.1.3': {'section': 'Storage - Object Storage', 'recommendation_#': '4.1.3', 'Title': 'Ensure Versioning is Enabled for Object Storage Buckets.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
+ '4.2.1': {'section': 'Storage - Block Volumes', 'recommendation_#': '4.2.1', 'Title': 'Ensure Block Volumes are encrypted with Customer-Managed Keys.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
+ '4.2.2': {'section': 'Storage - Block Volumes', 'recommendation_#': '4.2.2', 'Title': 'Ensure Boot Volumes are encrypted with Customer-Managed Key.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
+ '4.3.1': {'section': 'Storage - File Storage Service', 'recommendation_#': '4.3.1', 'Title': 'Ensure File Storage Systems are encrypted with Customer-Managed Keys.', 'Status': True, 'Level': 2, 'Total': [], 'Findings': [], 'CISv8': ['3.11'], 'CCCS Guard Rail': '', 'Remediation': []},
+
+ '5.1': {'section': 'Asset Management', 'recommendation_#': '5.1', 'Title': 'Create at least one compartment in your tenancy to store cloud resources.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.1'], 'CCCS Guard Rail': '2,3,8,12', 'Remediation': []},
+ '5.2': {'section': 'Asset Management', 'recommendation_#': '5.2', 'Title': 'Ensure no resources are created in the root compartment.', 'Status': True, 'Level': 1, 'Total': [], 'Findings': [], 'CISv8': ['3.12'], 'CCCS Guard Rail': '1,2,3', 'Remediation': []}
+ }
+ # Remediation Report
+ self.cis_report_data = {
+ "1.1": {
+ "Description": "To apply the least-privilege security principle, one can create service-level administrators in corresponding groups and assign specific users to each service-level administrative group in a tenancy. This limits administrative access in a tenancy.\nIt means service-level administrators can only manage resources of a specific service.\nExample policies for global/tenant level service-administrators:\n\nAllow group VolumeAdmins to manage volume-family in tenancy\nAllow group ComputeAdmins to manage instance-family in tenancy\nAllow group NetworkAdmins to manage virtual-network-family in tenancy\n\nOrganizations have various ways of defining service-administrators. Some may prefer creating service administrators at a tenant level and some per department or per project or even per application environment (dev/test/production etc.). Either approach works so long as the policies are written to limit access given to the service-administrators.\nExample policies for compartment level service-administrators:\nAllow group NonProdComputeAdmins to manage instance-family in compartment dev\nAllow group ProdComputeAdmins to manage instance-family in compartment production\nAllow group A-Admins to manage instance-family in compartment Project-A\nAllow group A-Admins to manage volume-family in compartment Project-A\n",
+ "Rationale": "Creating service-level administrators helps in tightly controlling access to Oracle Cloud Infrastructure (OCI) services to implement the least-privileged security principle.",
+ "Impact": "",
+ "Remediation": "Refer to the policy syntax document and create new policies if the audit results indicate that the required policies are missing.",
+ "Recommendation": "",
+ "Observation": "custom IAM policy that grants tenancy administrative access."},
+ "1.2": {
+ "Description": "There is a built-in OCI IAM policy enabling the Administrators group to perform any action within a tenancy. In the OCI IAM console, this policy reads:\n\nAllow group Administrators to manage all-resources in tenancy\n\nAdministrators create more users, groups, and policies to provide appropriate access to other groups.\nAdministrators should not allow any-other-group full access to the tenancy by writing a policy like this:\n\nAllow group any-other-group to manage all-resources in tenancy\n\nThe access should be narrowed down to ensure the least-privileged principle is applied.",
+ "Rationale": "Permission to manage all resources in a tenancy should be limited to a small number of users in the 'Administrators' group for break-glass situations and to set up users/groups/policies when a tenancy is created.\nNo group other than 'Administrators' in a tenancy should need access to all resources in a tenancy, as this violates the enforcement of the least privilege principle.",
+ "Impact": "",
+ "Remediation": "Remove any policy statement that allows any group other than Administrators or any service access to manage all resources in the tenancy.",
+ "Recommendation": "Evaluate if tenancy-wide administrative access is needed for the identified policy and update it to be more restrictive.",
+ "Observation": "custom IAM policy that grants tenancy administrative access."},
+ "1.3": {
+ "Description": "Tenancy administrators can create more users, groups, and policies to provide other service administrators access to OCI resources.\nFor example, an IAM administrator will need to have access to manage resources like compartments, users, groups, dynamic-groups, policies, identity-providers, tenancy tag-namespaces, tag-definitions in the tenancy.\nThe policy that gives IAM-Administrators or any other group full access to 'groups' resources should not allow access to the tenancy 'Administrators' group.\nThe policy statements would look like:\n\nAllow group IAMAdmins to inspect users in tenancy\nAllow group IAMAdmins to use users in tenancy where target.group.name != 'Administrators'\nAllow group IAMAdmins to inspect groups in tenancy\nAllow group IAMAdmins to use groups in tenancy where target.group.name != 'Administrators'\n\nNote: You must include separate statements for 'inspect' access, because the target.group.name variable is not used by the ListUsers and ListGroups operations.",
+ "Rationale": "These policy statements ensure that no other group can manage tenancy administrator users or the membership to the 'Administrators' group thereby gain or remove tenancy administrator access.",
+ "Impact": "",
+ "Remediation": "Verify the results to ensure that the policy statements that grant access to use or manage users or groups in the tenancy have a condition that excludes access to Administrators group or to users in the Administrators group.",
+ "Recommendation": "Evaluate if tenancy-wide administrative access is needed for the identified policy and update it to be more restrictive.",
+ "Observation": "custom IAM policy that grants tenancy administrative access."},
+ "1.4": {
+ "Description": "Password policies are used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are at least a certain length and are composed of certain characters.\nIt is recommended the password policy require a minimum password length of 14 characters and contain 1 non-alphabetic character (Number or 'Special Character').",
+ "Rationale": "In keeping with the overall goal of having users create a password that is not overly weak, an eight-character minimum password length is recommended for an MFA account, and 14 characters for a password only account. In addition, maximum password length should be made as long as possible based on system/software capabilities and not restricted by policy.\nIn general, it is true that longer passwords are better (harder to crack), but it is also true that forced password length requirements can cause user behavior that is predictable and undesirable. For example, requiring users to have a minimum 16-character password may cause them to choose repeating patterns like fourfourfourfour or passwordpassword that meet the requirement but aren't hard to guess. Additionally, length requirements increase the chances that users will adopt other insecure practices, like writing them down, re-using them or storing them unencrypted in their documents.\nPassword composition requirements are a poor defense against guessing attacks. Forcing users to choose some combination of upper-case, lower-case, numbers, and special characters has a negative impact. It places an extra burden on users and many will use predictable patterns (for example, a capital letter in the first position, followed by lowercase letters, then one or two numbers, and a “special character” at the end). Attackers know this, so dictionary attacks will often contain these common patterns and use the most common substitutions like, $ for s, @ for a, 1 for l, 0 for o.\nPasswords that are too complex in nature make it harder for users to remember, leading to bad practices. In addition, composition requirements provide no defense against common attack types such as social engineering or insecure storage of passwords.",
+ "Impact": "",
+ "Remediation": "Update the password policy: set the minimum length to 14 and require that passwords contain the expected special and numeric characters.",
+ "Recommendation": "It is recommended the password policy require a minimum password length of 14 characters and contain 1 non-alphabetic character (Number or 'Special Character').",
+ "Observation": "password policy/policies that do not enforce sufficient password complexity requirements."},
+ "1.5": {
+ "Description": "IAM password policies can require passwords to be rotated or expired after a given number of days. It is recommended that the password policy expire passwords after 365 days and that passwords are changed immediately based on events.",
+ "Rationale": "Excessive password expiration requirements do more harm than good, because these requirements make users select predictable passwords, composed of sequential words and numbers that are closely related to each other. In these cases, the next password can be predicted based on the previous one (incrementing a number used in the password for example). Also, password expiration requirements offer no containment benefits because attackers will often use credentials as soon as they compromise them. Instead, immediate password changes should be based on key events including, but not limited to:\n1. Indication of compromise\n2. Change of user roles\n3. When a user leaves the organization.\nNot only does changing passwords every few weeks or months frustrate the user, it's been suggested that it does more harm than good, because it could lead to bad practices by the user such as adding a character to the end of their existing password.\nIn addition, we also recommend a yearly password change. This is primarily because for all their good intentions users will share credentials across accounts. Therefore, even if a breach is publicly identified, the user may not see this notification, or forget they have an account on that site. This could leave a shared credential vulnerable indefinitely. Having an organizational policy of a 1-year (annual) password expiration is a reasonable compromise to mitigate this with minimal user burden.",
+ "Impact": "",
+ "Remediation": "Update the password policy by setting the number of days configured in 'Expires after' to 365.",
+ "Recommendation": "Evaluate whether password rotation policies are in line with your organizational standards.",
+ "Observation": "password policy/policies that do not require rotation."},
+ "1.6": {
+ "Description": "IAM password policies can prevent the reuse of a given password by the same user. It is recommended the password policy prevent the reuse of passwords.",
+ "Rationale": "Enforcing password history ensures that passwords are not reused for a certain period of time by the same user. If a user is not allowed to reuse any of their last 24 passwords, that window of time is greater. This helps maintain the effectiveness of password security.",
+ "Impact": "",
+ "Remediation": "Update the number of remembered passwords in previous passwords remembered setting to 24 in the password policy.",
+ "Recommendation": "Evaluate whether password reuse policies are in line with your organizational standards.",
+ "Observation": "password policy/policies that do not prevent reuse."},
+ "1.7": {
+ "Description": "Multi-factor authentication is a method of authentication that requires the use of more than one factor to verify a user's identity.\nWith MFA enabled in the IAM service, when a user signs in to Oracle Cloud Infrastructure, they are prompted for their user name and password, which is the first factor (something that they know). The user is then prompted to provide a second verification code from a registered MFA device, which is the second factor (something that they have). The two factors work together, requiring an extra layer of security to verify the user's identity and complete the sign-in process.\nOCI IAM supports two-factor authentication using a password (first factor) and a device that can generate a time-based one-time password (TOTP) (second factor).\nSee [OCI documentation](https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Tasks/usingmfa.htm) for more details.",
+ "Rationale": "Multi factor authentication adds an extra layer of security during the login process and makes it harder for unauthorized users to gain access to OCI resources.",
+ "Impact": "",
+ "Remediation": "Each user must enable MFA for themselves using a device they will have access to every time they sign in. An administrator cannot enable MFA for another user but can enforce MFA by identifying the list of non-compliant users, notifying them, or disabling access by resetting the password for non-compliant accounts.",
+ "Recommendation": "Evaluate if local users are required. For Break Glass accounts ensure MFA is in place.",
+ "Observation": "users with Password access but not MFA."},
+ "1.8": {
+ "Description": "API keys are used by administrators, developers, services and scripts for accessing OCI APIs directly or via SDKs/OCI CLI to search, create, update or delete OCI resources.\nThe API key is an RSA key pair. The private key is used for signing the API requests and the public key is associated with a local or synchronized user's profile.",
+ "Rationale": "It is important to secure and rotate an API key every 90 days or less as it provides the same level of access as the user it is associated with.\nIn addition to being a security engineering best practice, this is also a compliance requirement. For example, PCI-DSS Section 3.6.4 states, \"Verify that key-management procedures include a defined cryptoperiod for each key type in use and define a process for key changes at the end of the defined crypto period(s).\"",
+ "Impact": "",
+ "Remediation": "Delete any API Keys with a date of 90 days or older under the Created column of the API Key table.",
+ "Recommendation": "Evaluate if API Keys are still used/required and rotate them regularly. It is important to secure and rotate an API key every 90 days or less as it provides the same level of access as the user it is associated with.",
+ "Observation": "user(s) with API keys that have not been rotated within 90 days."},
+ "1.9": {
+ "Description": "Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3.\nThis special signing key is an Access Key/Secret Key pair. Oracle generates the Customer Secret key to pair with the Access Key.",
+ "Rationale": "It is important to secure and rotate a customer secret key every 90 days or less as it provides the same level of object storage access as the user it is associated with.",
+ "Impact": "",
+ "Remediation": "Delete any Access Keys with a date of 90 days or older under the Created column of the Customer Secret Keys.",
+ "Recommendation": "Evaluate if Customer Secret Keys are still used/required and rotate the Keys accordingly.",
+ "Observation": "users with Customer Secret Keys that have not been rotated within 90 days."},
+ "1.10": {
+ "Description": "Auth tokens are authentication tokens generated by Oracle. You use auth tokens to authenticate with APIs that do not support the Oracle Cloud Infrastructure signature-based authentication. If the service requires an auth token, the service-specific documentation instructs you to generate one and how to use it.",
+ "Rationale": "It is important to secure and rotate an auth token every 90 days or less as it provides the same level of access to APIs that do not support the OCI signature-based authentication as the user associated to it.",
+ "Impact": "",
+ "Remediation": "Delete any auth token with a date of 90 days or older under the Created column of the Auth Tokens.",
+ "Recommendation": "Evaluate if Auth Tokens are still used/required and rotate Auth tokens.",
+ "Observation": "user(s) with auth tokens that have not been rotated in 90 days."},
+ "1.11": {
+ "Description": "Tenancy administrator users have full access to the organization's OCI tenancy. API keys associated with user accounts are used for invoking the OCI APIs via custom programs or clients like CLI/SDKs. The clients are typically used for performing day-to-day operations and should never require full tenancy access. Service-level administrative users with API keys should be used instead.",
+ "Rationale": "For performing day-to-day operations tenancy administrator access is not needed.\nService-level administrative users with API keys should be used instead, to apply the least privilege security principle.",
+ "Impact": "",
+ "Remediation": "For each tenancy administrator user who has an API key, select API Keys from the menu and delete any associated keys from the API Keys table.",
+ "Recommendation": "Evaluate if a user with API Keys requires Administrator access and use a least privilege approach.",
+ "Observation": "users with Administrator access and API Keys."},
+ "1.12": {
+ "Description": "All OCI IAM local user accounts have an email address field associated with the account. It is recommended to specify an email address that is valid and current.\nIf you have an email address in your user profile, you can use the Forgot Password link on the sign on page to have a temporary password sent to you.",
+ "Rationale": "Having a valid and current email address associated with an OCI IAM local user account allows you to tie the account to identity in your organization. It also allows that user to reset their password if it is forgotten or lost.",
+ "Impact": "",
+ "Remediation": "Update the current email address in the email text box for each non-compliant user.",
+ "Recommendation": "Add emails to users to allow them to use the 'Forgot Password' feature and uniquely identify the user. For service accounts it could be a mail alias.",
+ "Observation": "users without an email."},
+ "1.13": {
+ "Description": "OCI instances, OCI database and OCI functions can access other OCI resources either via an OCI API key associated to a user or by being included in a Dynamic Group that has an IAM policy granting it the required access. Access to OCI Resources refers to making API calls to another OCI resource like Object Storage, OCI Vaults, etc.",
+ "Rationale": "Dynamic Groups reduces the risks related to hard coded credentials. Hard coded API keys can be shared and require rotation which can open them up to being compromised. Compromised credentials could allow access to OCI services outside of the expected radius.",
+ "Impact": "For an OCI instance that contains embedded credentials, audit the scripts and environment variables to ensure that none of them contain OCI API Keys or credentials.",
+ "Remediation": "Create a Dynamic Group and enter matching rules that include the instances accessing your OCI resources. Refer:\"https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingdynamicgroups.htm\".",
+ "Recommendation": "Evaluate how your instances, functions, and autonomous database interact with other OCI services.",
+ "Observation": "instances, functions, and autonomous databases that access OCI resources via API keys instead of Dynamic Groups."},
+ "1.14": {
+ "Description": "To apply the separation of duties security principle, one can restrict service-level administrators from being able to delete resources they are managing. It means service-level administrators can only manage resources of a specific service but not delete resources for that specific service.\nExample policies for global/tenant level for block volume service-administrators:\nAllow group VolumeUsers to manage volumes in tenancy where request.permission!='VOLUME_DELETE'\nAllow group VolumeUsers to manage volume-backups in tenancy where request.permission!='VOLUME_BACKUP_DELETE'\nExample policies for global/tenant level for file storage system service-administrators:\nAllow group FileUsers to manage file-systems in tenancy where request.permission!='FILE_SYSTEM_DELETE'\nAllow group FileUsers to manage mount-targets in tenancy where request.permission!='MOUNT_TARGET_DELETE'\nAllow group FileUsers to manage export-sets in tenancy where request.permission!='EXPORT_SET_DELETE'\nExample policies for global/tenant level for object storage system service-administrators:\nAllow group BucketUsers to manage objects in tenancy where request.permission!='OBJECT_DELETE'\nAllow group BucketUsers to manage buckets in tenancy where request.permission!='BUCKET_DELETE'",
+ "Rationale": "Creating service-level administrators without the ability to delete the resource they are managing helps in tightly controlling access to Oracle Cloud Infrastructure (OCI) services by implementing the separation of duties security principle.",
+ "Impact": "",
+ "Remediation": "Add the appropriate where condition to any policy statement that allows storage service-level administrators to manage the storage service.",
+ "Recommendation": "To apply a separation of duties security principle, it is recommended to restrict service-level administrators from being able to delete resources they are managing.",
+ "Observation": "IAM Policies that give service administrator the ability to delete service resources."},
+ "2.1": {
+ "Description": "Security lists provide stateful or stateless filtering of ingress/egress network traffic to OCI resources on a subnet level. It is recommended that no security list allows unrestricted ingress access to port 22.",
+ "Rationale": "Removing unfettered connectivity to remote console services, such as Secure Shell (SSH), reduces a server's exposure to risk.",
+ "Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
+ "Remediation": "For each security list in the returned results, click the security list name. Either edit the ingress rule to be more restrictive, delete the ingress rule or click on the VCN and terminate the security list as appropriate.",
+ "Recommendation": "Review the security lists. If they are not used (attached to a subnet) they should be deleted or emptied if possible. For attached security lists it is recommended to restrict the CIDR block to only allow access to Port 22 from known networks.",
+ "Observation": "Security lists that allow internet access to port 22. (Note this does not necessarily mean external traffic can reach a compute instance)."},
+ "2.2": {
+ "Description": "Security lists provide stateful or stateless filtering of ingress/egress network traffic to OCI resources on a subnet level. It is recommended that no security list allows unrestricted ingress access to port 3389.",
+ "Rationale": "Removing unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), reduces a server's exposure to risk.",
+ "Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
+ "Remediation": "For each security list in the returned results, click the security list name. Either edit the ingress rule to be more restrictive, delete the ingress rule or click on the VCN and terminate the security list as appropriate.",
+ "Recommendation": "Review the security lists. If they are not used (attached to a subnet) they should be deleted or emptied if possible. For attached security lists it is recommended to restrict the CIDR block to only allow access to Port 3389 from known networks.",
+ "Observation": "Security lists that allow internet access to port 3389. (Note this does not necessarily mean external traffic can reach a compute instance)."
+ },
+ "2.3": {
+ "Description": "Network security groups provide stateful filtering of ingress/egress network traffic to OCI resources. It is recommended that no security group allows unrestricted ingress access to port 22.",
+ "Rationale": "Removing unfettered connectivity to remote console services, such as Secure Shell (SSH), reduces a server's exposure to risk.",
+ "Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
+ "Remediation": "Using the details returned from the audit procedure either Remove the security rules or Update the security rules.",
+ "Recommendation": "Review the network security groups. If they are not used (attached to a VNIC) they should be deleted or emptied if possible. For attached network security groups it is recommended to restrict the CIDR block to only allow access to Port 22 from known networks.",
+ "Observation": "Network security groups that allow internet access to port 22. (Note this does not necessarily mean external traffic can reach a compute instance)."
+ },
+ "2.4": {
+ "Description": "Network security groups provide stateful filtering of ingress/egress network traffic to OCI resources. It is recommended that no security group allows unrestricted ingress access to port 3389.",
+ "Rationale": "Removing unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), reduces a server's exposure to risk.",
+ "Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another network security group or security list.",
+ "Remediation": "Using the details returned from the audit procedure either Remove the security rules or Update the security rules.",
+ "Recommendation": "Review the network security groups. If they are not used (attached to a VNIC) they should be deleted or emptied if possible. For attached network security groups it is recommended to restrict the CIDR block to only allow access to Port 3389 from known networks.",
+ "Observation": "Network security groups that allow internet access to port 3389. (Note this does not necessarily mean external traffic can reach a compute instance)."
+ },
+ "2.5": {
+ "Description": "A default security list is created when a Virtual Cloud Network (VCN) is created. Security lists provide stateful filtering of ingress and egress network traffic to OCI resources. It is recommended no security list allows unrestricted ingress access to Secure Shell (SSH) via port 22.",
+ "Rationale": "Removing unfettered connectivity to remote console services, such as SSH on port 22, reduces a server's exposure to unauthorized access.",
+ "Impact": "For updating an existing environment, care should be taken to ensure that administrators currently relying on an existing ingress from 0.0.0.0/0 have access to ports 22 and/or 3389 through another security group.",
+ "Remediation": "Select the Default Security List for the VCN and remove the Ingress Rule with Source 0.0.0.0/0, IP Protocol TCP and Destination Port Range 22.",
+ "Recommendation": "Create specific custom security lists with workload specific rules and attach to subnets.",
+ "Observation": "Default Security lists that allow more traffic than ICMP."
+ },
+ "2.6": {
+ "Description": "Oracle Integration (OIC) is a complete, secure, but lightweight integration solution that enables you to connect your applications in the cloud. It simplifies connectivity between your applications and connects both your applications that live in the cloud and your applications that still live on premises. Oracle Integration provides secure, enterprise-grade connectivity regardless of the applications you are connecting or where they reside. OIC instances are created within an Oracle managed secure private network with each having a public endpoint. The capability to configure ingress filtering of network traffic to protect your OIC instances from unauthorized network access is included. It is recommended that network access to your OIC instances be restricted to your approved corporate IP Addresses or Virtual Cloud Networks (VCN)s.",
+ "Rationale": "Restricting connectivity to OIC Instances reduces an OIC instance's exposure to risk.",
+ "Impact": "When updating ingress filters for an existing environment, care should be taken to ensure that IP addresses and VCNs currently used by administrators, users, and services to access your OIC instances are included in the updated filters.",
+ "Remediation": "For each OIC instance in the returned results, select the OIC Instance name, then edit the Network Access to be more restrictive.",
+ "Recommendation": "It is recommended that OIC Network Access is restricted to your corporate IP Addresses or VCNs for OIC Instances.",
+ "Observation": "OIC Instances that allow unfiltered public ingress traffic (Authentication and authorization is still required)."
+ },
+ "2.7": {
+ "Description": "Oracle Analytics Cloud (OAC) is a scalable and secure public cloud service that provides a full set of capabilities to explore and perform collaborative analytics for you, your workgroup, and your enterprise. OAC instances provide ingress filtering of network traffic or can be deployed with in an existing Virtual Cloud Network VCN. It is recommended that all new OAC instances be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing OAC instances.",
+ "Rationale": "Restricting connectivity to Oracle Analytics Cloud instances reduces an OAC instance's exposure to risk.",
+ "Impact": "When updating ingress filters for an existing environment, care should be taken to ensure that IP addresses and VCNs currently used by administrators, users, and services to access your OAC instances are included in the updated filters. Also, these changes will temporarily bring the OAC instance offline.",
+ "Remediation": "For each OAC instance in the returned results, select the OAC Instance name, then edit the Access Control Rules by clicking +Another Rule and add rules as required.",
+ "Recommendation": "It is recommended that all new OAC instances be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing OAC instances.",
+ "Observation": "OAC Instances that allow unfiltered public ingress traffic (Authentication and authorization is still required)."
+ },
+ "2.8": {
+ "Description": "Oracle Autonomous Database Shared (ADB-S) automates database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs. ADB-S provide ingress filtering of network traffic or can be deployed within an existing Virtual Cloud Network (VCN). It is recommended that all new ADB-S databases be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing ADB-S databases.",
+ "Rationale": "Restricting connectivity to ADB-S Databases reduces an ADB-S database's exposure to risk.",
+ "Impact": "When updating ingress filters for an existing environment, care should be taken to ensure that IP addresses and VCNs currently used by administrators, users, and services to access your ADB-S instances are included in the updated filters.",
+ "Remediation": "For each ADB-S database in the returned results, select the ADB-S database name, then edit the Access Control Rules by clicking +Another Rule and add rules as required.",
+ "Recommendation": "It is recommended that all new ADB-S databases be deployed within a VCN and that the Access Control Rules are restricted to your corporate IP Addresses or VCNs for existing ADB-S databases.",
+ "Observation": "ADB-S Instances that allow unfiltered public ingress traffic (Authentication and authorization is still required)."
+ },
+ "3.1": {
+ "Description": "Ensuring audit logs are kept for 365 days.",
+ "Rationale": "Log retention controls how long activity logs should be retained. Studies have shown that The Mean Time to Detect (MTTD) a cyber breach is anywhere from 30 days in some sectors to up to 206 days in others. Retaining logs for at least 365 days or more will provide the ability to respond to incidents.",
+ "Impact": "There is no performance impact when enabling the above described features but additional audit data will be retained.",
+ "Remediation": "Go to the Tenancy Details page and edit Audit Retention Policy by setting AUDIT RETENTION PERIOD to 365.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.2": {
+ "Description": "Using default tags is a way to ensure all resources that support tags are tagged during creation. Tags can be based on static values or based on computed values. It is recommended to setup default tags early on to ensure all created resources will get tagged.\nTags are scoped to Compartments and are inherited by Child Compartments. The recommendation is to create default tags like “CreatedBy” at the Root Compartment level to ensure all resources get tagged.\nWhen using Tags it is important to ensure that Tag Namespaces are protected by IAM Policies otherwise this will allow users to change tags or tag values.\nDepending on the age of the OCI Tenancy there may already be Tag defaults setup at the Root Level and no need for further action to implement this action.",
+ "Rationale": "In the case of an incident having default tags like “CreatedBy” applied will provide info on who created the resource without having to search the Audit logs.",
+ "Impact": "There is no performance impact when enabling the above described features.",
+ "Remediation": "Update the root compartment's Tag Defaults. In the Tag Defaults table verify that there is a Tag with a value of \"${iam.principal.names}\" and a Tag Key Status of Active. Also create a Tag Key Definition by providing a Tag Key, Description and selecting 'Static Value' for Tag Value Type.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.3": {
+ "Description": "Notifications provide a multi-channel messaging service that allow users and applications to be notified of events of interest occurring within OCI. Messages can be sent via eMail, HTTPs, PagerDuty, Slack or the OCI Function service. Some channels, such as eMail require confirmation of the subscription before it becomes active.",
+ "Rationale": "Creating one or more notification topics allow administrators to be notified of relevant changes made to OCI infrastructure.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Create a Topic in the notifications service under the appropriate compartment and add the subscriptions with current email address and correct protocol.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.4": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Identity Providers are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments. It is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "OCI Identity Providers allow management of User ID / passwords in external systems and use of those credentials to access OCI resources. Identity Providers allow users to single sign-on to OCI console and have other OCI credentials like API Keys.\nMonitoring and alerting on changes to Identity Providers will help in identifying changes to the security posture.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Create a Rule Condition in the Events service by selecting Identity in the Service Name drop-down and selecting Identity Provider – Create, Identity Provider – Delete and Identity Provider – Update. In the Actions section select Notifications as Action Type and select the compartment and topic to be used.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.5": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Identity Provider Group Mappings are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments. It is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "IAM Policies govern access to all resources within an OCI Tenancy. IAM Policies use OCI Groups for assigning the privileges. Identity Provider Groups could be mapped to OCI Groups to assign privileges to federated users in OCI. Monitoring and alerting on changes to Identity Provider Group mappings will help in identifying changes to the security posture.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Find and click the Rule that handles Idp Group Mapping Changes. Click the Edit Rule button and verify that the RuleConditions section contains a condition for the Service Identity and Event Types: Idp Group Mapping – Create, Idp Group Mapping – Delete, and Idp Group Mapping – Update and confirm Action Type contains: Notifications and that a valid Topic is referenced.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.6": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when IAM Groups are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "IAM Groups control access to all resources within an OCI Tenancy.\n Monitoring and alerting on changes to IAM Groups will help in identifying changes to satisfy least privilege principle.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Create a Rule Condition by selecting Identity in the Service Name drop-down and selecting Group – Create, Group – Delete and Group – Update. In the Actions section select Notifications as Action Type and select the compartment and topic to be used.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.7": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when IAM Policies are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "IAM Policies govern access to all resources within an OCI Tenancy.\n Monitoring and alerting on changes to IAM policies will help in identifying changes to the security posture.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Create a Rule Condition by selecting Identity in the Service Name drop-down and selecting Policy – Change Compartment, Policy – Create, Policy – Delete and Policy – Update. In the Actions section select Notifications as Action Type and select the compartment and topic to be used.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.8": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when IAM Users are created, updated, deleted, capabilities updated, or state updated. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "Users use or manage Oracle Cloud Infrastructure resources.\n Monitoring and alerting on changes to Users will help in identifying changes to the security posture.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Edit Rule that handles IAM User Changes and verify that the Rule Conditions section contains a condition for the Service Identity and Event Types: User – Create, User – Delete, User – Update, User Capabilities – Update, User State – Update.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.9": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Virtual Cloud Networks are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "Virtual Cloud Networks (VCNs) closely resembles a traditional network.\n Monitoring and alerting on changes to VCNs will help in identifying changes to the security posture.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Edit Rule that handles VCN Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: VCN – Create, VCN - Delete, and VCN – Update.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.10": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when route tables are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "Route tables control traffic flowing to or from Virtual Cloud Networks and Subnets.\n Monitoring and alerting on changes to route tables will help in identifying changes to these traffic flows.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Edit Rule that handles Route Table Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: Route Table – Change Compartment, Route Table – Create, Route Table - Delete, and Route Table – Update.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.11": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when security lists are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "Security Lists control traffic flowing into and out of Subnets within a Virtual Cloud Network.\n Monitoring and alerting on changes to Security Lists will help in identifying changes to these security controls.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Edit Rule that handles Security List Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: Security List – Change Compartment, Security List – Create, Security List - Delete, and Security List – Update.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.12": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when network security groups are created, updated or deleted. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "Network Security Groups control traffic flowing between Virtual Network Cards attached to Compute instances.\n Monitoring and alerting on changes to Network Security Groups will help in identifying changes to these security controls.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Edit Rule that handles Network Security Group Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: Network Security Group – Change Compartment, Network Security Group – Create, Network Security Group - Delete, and Network Security Group – Update.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.13": {
+ "Description": "It is recommended to setup an Event Rule and Notification that gets triggered when Network Gateways are created, updated, deleted, attached, detached, or moved. This recommendation includes Internet Gateways, Dynamic Routing Gateways, Service Gateways, Local Peering Gateways, and NAT Gateways. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.",
+ "Rationale": "Network Gateways act as routers between VCNs and the Internet, Oracle Services Networks, other VCNS, and on-premise networks.\n Monitoring and alerting on changes to Network Gateways will help in identifying changes to the security posture.",
+ "Impact": "There is no performance impact when enabling the above described features but depending on the amount of notifications sent per month there may be a cost associated.",
+ "Remediation": "Edit Rule that handles Network Gateways Changes and verify that the RuleConditions section contains a condition for the Service Networking and Event Types: DRG – Create, DRG – Delete, DRG – Update, DRG Attachment – Create, DRG Attachment – Delete, DRG Attachment – Update, Internet Gateway – Create, Internet Gateway – Delete, Internet Gateway – Update, Internet Gateway – Change Compartment, Local Peering Gateway – Create, Local Peering Gateway – Delete End, Local Peering Gateway – Update, Local Peering Gateway – Change Compartment, NAT Gateway – Create, NAT Gateway – Delete, NAT Gateway – Update, NAT Gateway – Change Compartment, Service Gateway – Create, Service Gateway – Delete Begin, Service Gateway – Delete End, Service Gateway – Update, Service Gateway – Attach Service, Service Gateway – Detach Service, Service Gateway – Change Compartment.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.14": {
+ "Description": "VCN flow logs record details about traffic that has been accepted or rejected based on the security list rule.",
+ "Rationale": "Enabling VCN flow logs enables you to monitor traffic flowing within your virtual network and can be used to detect anomalous traffic.",
+ "Impact": "Enabling VCN flow logs will not affect the performance of your virtual network, but it will generate additional use of object storage that should be controlled via object lifecycle management.\nBy default, VCN flow logs are stored for 30 days in object storage. Users can specify a longer retention period.",
+ "Remediation": "Enable Flow Logs (all records) on Virtual Cloud Networks (subnets) under the relevant resource compartment. Beforehand, create a Log Group in the Logging service if one does not exist.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.15": {
+ "Description": "Cloud Guard detects misconfigured resources and insecure activity within a tenancy and provides security administrators with the visibility to resolve these issues. Upon detection, Cloud Guard can suggest, assist, or take corrective actions to mitigate these issues. Cloud Guard should be enabled in the root compartment of your tenancy with the default configuration, activity detectors and responders.",
+ "Rationale": "Cloud Guard provides an automated means to monitor a tenancy for resources that are configured in an insecure manner as well as risky network activity from these resources.",
+ "Impact": "There is no performance impact when enabling the above described features, but additional IAM policies will be required.",
+ "Remediation": "Enable Cloud Guard by selecting it from the services menu and providing the appropriate reporting region and other configurations.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.16": {
+ "Description": "Oracle Cloud Infrastructure Vault securely stores master encryption keys that protect your encrypted data. You can use the Vault service to rotate keys to generate new cryptographic material. Periodically rotating keys limits the amount of data encrypted by one key version.",
+ "Rationale": "Rotating keys annually limits the data encrypted under one key version. Key rotation thereby reduces the risk in case a key is ever compromised.",
+ "Impact": "",
+ "Remediation": "In the Security services menu, select Vault. Ensure the date of each Master Encryption Key under the Created column is no more than 365 days old.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "3.17": {
+ "Description": "Object Storage write logs will log all write requests made to objects in a bucket.",
+ "Rationale": "When an Object Storage write log is enabled, the 'requestAction' property will contain values of 'PUT', 'POST', or 'DELETE'. This provides more visibility into changes to objects in your buckets.",
+ "Impact": "There is no performance impact when enabling the above described features, but it will generate additional use of object storage that should be controlled via object lifecycle management.\nBy default, Object Storage logs are stored for 30 days in object storage. Users can specify a longer retention period.",
+ "Remediation": "Enable a log for the relevant bucket by selecting Write Access Events from the Log Category. Beforehand, create a Log Group if required.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "4.1.1": {
+ "Description": "A bucket is a logical container for storing objects. It is associated with a single compartment that has policies that determine what action a user can perform on a bucket and on all the objects in the bucket. It is recommended that no bucket be publicly accessible.",
+ "Rationale": "Removing unfettered reading of objects in a bucket reduces an organization's exposure to data loss.",
+ "Impact": "For updating an existing bucket, care should be taken to ensure objects in the bucket can be accessed through either IAM policies or pre-authenticated requests.",
+ "Remediation": "Edit the visibility of each bucket and set it to 'Private'.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "4.1.2": {
+ "Description": "Oracle Object Storage buckets support encryption with a Customer Managed Key (CMK). By default, Object Storage buckets are encrypted with an Oracle managed key.",
+ "Rationale": "Encryption of Object Storage buckets with a Customer Managed Key (CMK) provides an additional level of security on your data by allowing you to manage your own encryption key lifecycle management for the bucket.",
+ "Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the Object Storage service to use keys on your behalf.\nRequired Policy:\nAllow service objectstorage-<region_name> to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'",
+ "Remediation": "For each Object Storage bucket, assign a Master Encryption Key by clicking Assign next to Encryption Key on the bucket details page and selecting the Vault and key.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "4.1.3": {
+ "Description": "A bucket is a logical container for storing objects. Object versioning is enabled at the bucket level and is disabled by default upon creation. Versioning directs Object Storage to automatically create an object version each time a new object is uploaded, an existing object is overwritten, or when an object is deleted. You can enable object versioning at bucket creation time or later.",
+ "Rationale": "Versioning object storage buckets provides for additional integrity of your data. Management of data integrity is critical to protecting and accessing protected data. Some customers want to identify object storage buckets without versioning in order to apply their own data lifecycle protection and management policy.",
+ "Impact": "",
+ "Remediation": "Enable versioning on each bucket by editing the bucket configuration.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "4.2.1": {
+ "Description": "Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and manage block storage volumes. By default, the Oracle service manages the keys that encrypt this block volume. Block Volumes can also be encrypted using a customer managed key.",
+ "Rationale": "Encryption of block volumes provides an additional level of security for your data. Management of encryption keys is critical to protecting and accessing protected data. Customers should identify block volumes encrypted with Oracle service managed keys in order to determine if they want to manage the keys for certain volumes and then apply their own key lifecycle management to the selected block volumes.",
+ "Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the Block Volume service to use the keys you create.\nRequired IAM Policy:\nAllow service blockstorage to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'",
+ "Remediation": "For each block volume in the results, assign the encryption key by selecting the Vault Compartment and Vault, then the Master Encryption Key Compartment and Master Encryption Key, and clicking Assign.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "4.2.2": {
+ "Description": "When you launch a virtual machine (VM) or bare metal instance based on a platform image or custom image, a new boot volume for the instance is created in the same compartment. That boot volume is associated with that instance until you terminate the instance. By default, the Oracle service manages the keys that encrypt this boot volume. Boot Volumes can also be encrypted using a customer managed key.",
+ "Rationale": "Encryption of boot volumes provides an additional level of security for your data. Management of encryption keys is critical to protecting and accessing protected data. Customers should identify boot volumes encrypted with Oracle service managed keys in order to determine if they want to manage the keys for certain boot volumes and then apply their own key lifecycle management to the selected boot volumes.",
+ "Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the Boot Volume service to use the keys you create.\nRequired IAM Policy:\nAllow service Bootstorage to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'",
+ "Remediation": "For each boot volume in the results, assign the encryption key by selecting the Vault Compartment and Vault, then the Master Encryption Key Compartment and Master Encryption Key, and clicking Assign.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "4.3.1": {
+ "Description": "Oracle Cloud Infrastructure File Storage service (FSS) provides a durable, scalable, secure, enterprise-grade network file system. By default, the Oracle service manages the keys that encrypt FSS file systems. FSS file systems can also be encrypted using a customer managed key.",
+ "Rationale": "Encryption of FSS systems provides an additional level of security for your data. Management of encryption keys is critical to protecting and accessing protected data. Customers should identify FSS file systems that are encrypted with Oracle service managed keys in order to determine if they want to manage the keys for certain FSS file systems and then apply their own key lifecycle management to the selected FSS file systems.",
+ "Impact": "Encrypting with a Customer Managed Key requires a Vault and a Customer Master Key. In addition, you must authorize the File Storage service to use the keys you create.\nRequired IAM Policy:\nAllow service FssOc1Prod to use keys in compartment <compartment-id> where target.key.id = '<key_OCID>'",
+ "Remediation": "For each file system in the results, assign the encryption key by selecting the Vault Compartment and Vault, then the Master Encryption Key Compartment and Master Encryption Key, and clicking Assign.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "5.1": {
+ "Description": "When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy, which is the root compartment that holds all your cloud resources. You then create additional compartments within the tenancy (root compartment) and corresponding policies to control access to the resources in each compartment.\nCompartments allow you to organize and control access to your cloud resources. A compartment is a collection of related resources (such as instances, databases, virtual cloud networks, block volumes) that can be accessed only by certain groups that have been given permission by an administrator.",
+ "Rationale": "Compartments are a logical group that adds an extra layer of isolation, organization and authorization making it harder for unauthorized users to gain access to OCI resources.",
+ "Impact": "Once the compartment is created, an OCI IAM policy must be created to allow a group to access resources in the compartment; otherwise, only groups with tenancy-level access will have access.",
+ "Remediation": "Create the new compartment under the root compartment.",
+ "Recommendation": "",
+ "Observation": ""
+ },
+ "5.2": {
+ "Description": "When you create a cloud resource such as an instance, block volume, or cloud network, you must specify to which compartment you want the resource to belong. Placing resources in the root compartment makes it difficult to organize and isolate those resources.",
+ "Rationale": "Placing resources into a compartment will allow you to organize and have more granular access controls to your cloud resources.",
+ "Impact": "Placing a resource in a compartment will impact how you write policies to manage access and organize that resource.",
+ "Remediation": "For each item in the returned results, select Move Resource (or More Actions, then Move Resource), choose a destination compartment other than root, and move the resource.",
+ "Recommendation": "",
+ "Observation": ""
+ }
+ }
+
+ # MAP Checks
+ self.obp_foundations_checks = {
+ 'Cost_Tracking_Budgets': {'Status': False, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en-us/iaas/Content/Billing/Concepts/budgetsoverview.htm#Budgets_Overview"},
+ 'SIEM_Audit_Log_All_Comps': {'Status': True, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"}, # Assuming True
+ 'SIEM_Audit_Incl_Sub_Comp': {'Status': True, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"}, # Assuming True
+ 'SIEM_VCN_Flow_Logging': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"},
+ 'SIEM_Write_Bucket_Logs': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"},
+ 'SIEM_Read_Bucket_Logs': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en/solutions/oci-aggregate-logs-siem/index.html"},
+ 'Networking_Connectivity': {'Status': True, 'Findings': [], 'OBP': [], "Documentation": "https://docs.oracle.com/en-us/iaas/Content/Network/Troubleshoot/drgredundancy.htm"},
+ 'Cloud_Guard_Config': {'Status': None, 'Findings': [], 'OBP': [], "Documentation": ""},
+ }
+ # MAP Regional Data
+ self.__obp_regional_checks = {}
+
+ # CIS monitoring notifications check
+ self.cis_monitoring_checks = {
+ "3.4": [
+ 'com.oraclecloud.identitycontrolplane.createidentityprovider',
+ 'com.oraclecloud.identitycontrolplane.deleteidentityprovider',
+ 'com.oraclecloud.identitycontrolplane.updateidentityprovider'
+ ],
+ "3.5": [
+ 'com.oraclecloud.identitycontrolplane.createidpgroupmapping',
+ 'com.oraclecloud.identitycontrolplane.deleteidpgroupmapping',
+ 'com.oraclecloud.identitycontrolplane.updateidpgroupmapping'
+ ],
+ "3.6": [
+ 'com.oraclecloud.identitycontrolplane.creategroup',
+ 'com.oraclecloud.identitycontrolplane.deletegroup',
+ 'com.oraclecloud.identitycontrolplane.updategroup'
+ ],
+ "3.7": [
+ 'com.oraclecloud.identitycontrolplane.createpolicy',
+ 'com.oraclecloud.identitycontrolplane.deletepolicy',
+ 'com.oraclecloud.identitycontrolplane.updatepolicy'
+ ],
+ "3.8": [
+ 'com.oraclecloud.identitycontrolplane.createuser',
+ 'com.oraclecloud.identitycontrolplane.deleteuser',
+ 'com.oraclecloud.identitycontrolplane.updateuser',
+ 'com.oraclecloud.identitycontrolplane.updateusercapabilities',
+ 'com.oraclecloud.identitycontrolplane.updateuserstate'
+ ],
+ "3.9": [
+ 'com.oraclecloud.virtualnetwork.createvcn',
+ 'com.oraclecloud.virtualnetwork.deletevcn',
+ 'com.oraclecloud.virtualnetwork.updatevcn'
+ ],
+ "3.10": [
+ 'com.oraclecloud.virtualnetwork.changeroutetablecompartment',
+ 'com.oraclecloud.virtualnetwork.createroutetable',
+ 'com.oraclecloud.virtualnetwork.deleteroutetable',
+ 'com.oraclecloud.virtualnetwork.updateroutetable'
+ ],
+ "3.11": [
+ 'com.oraclecloud.virtualnetwork.changesecuritylistcompartment',
+ 'com.oraclecloud.virtualnetwork.createsecuritylist',
+ 'com.oraclecloud.virtualnetwork.deletesecuritylist',
+ 'com.oraclecloud.virtualnetwork.updatesecuritylist'
+ ],
+ "3.12": [
+ 'com.oraclecloud.virtualnetwork.changenetworksecuritygroupcompartment',
+ 'com.oraclecloud.virtualnetwork.createnetworksecuritygroup',
+ 'com.oraclecloud.virtualnetwork.deletenetworksecuritygroup',
+ 'com.oraclecloud.virtualnetwork.updatenetworksecuritygroup'
+ ],
+ "3.13": [
+ 'com.oraclecloud.virtualnetwork.createdrg',
+ 'com.oraclecloud.virtualnetwork.deletedrg',
+ 'com.oraclecloud.virtualnetwork.updatedrg',
+ 'com.oraclecloud.virtualnetwork.createdrgattachment',
+ 'com.oraclecloud.virtualnetwork.deletedrgattachment',
+ 'com.oraclecloud.virtualnetwork.updatedrgattachment',
+ 'com.oraclecloud.virtualnetwork.changeinternetgatewaycompartment',
+ 'com.oraclecloud.virtualnetwork.createinternetgateway',
+ 'com.oraclecloud.virtualnetwork.deleteinternetgateway',
+ 'com.oraclecloud.virtualnetwork.updateinternetgateway',
+ 'com.oraclecloud.virtualnetwork.changelocalpeeringgatewaycompartment',
+ 'com.oraclecloud.virtualnetwork.createlocalpeeringgateway',
+ 'com.oraclecloud.virtualnetwork.deletelocalpeeringgateway.end',
+ 'com.oraclecloud.virtualnetwork.updatelocalpeeringgateway',
+ 'com.oraclecloud.natgateway.changenatgatewaycompartment',
+ 'com.oraclecloud.natgateway.createnatgateway',
+ 'com.oraclecloud.natgateway.deletenatgateway',
+ 'com.oraclecloud.natgateway.updatenatgateway',
+ 'com.oraclecloud.servicegateway.attachserviceid',
+ 'com.oraclecloud.servicegateway.changeservicegatewaycompartment',
+ 'com.oraclecloud.servicegateway.createservicegateway',
+ 'com.oraclecloud.servicegateway.deleteservicegateway.end',
+ 'com.oraclecloud.servicegateway.detachserviceid',
+ 'com.oraclecloud.servicegateway.updateservicegateway'
+
+ ]
+ }
+
+ # CIS IAM check
+ self.cis_iam_checks = {
+ "1.3": {"targets": ["target.group.name!=Administrators"]},
+ "1.13": {"resources": ["fnfunc", "instance", "autonomousdatabase", "resource.compartment.id"]},
+ "1.14": {
+ "all-resources": [
+ "request.permission!=BUCKET_DELETE", "request.permission!=OBJECT_DELETE", "request.permission!=EXPORT_SET_DELETE",
+ "request.permission!=MOUNT_TARGET_DELETE", "request.permission!=FILE_SYSTEM_DELETE", "request.permission!=VOLUME_BACKUP_DELETE",
+ "request.permission!=VOLUME_DELETE", "request.permission!=FILE_SYSTEM_DELETE_SNAPSHOT"
+ ],
+ "file-family": [
+ "request.permission!=EXPORT_SET_DELETE", "request.permission!=MOUNT_TARGET_DELETE",
+ "request.permission!=FILE_SYSTEM_DELETE", "request.permission!=FILE_SYSTEM_DELETE_SNAPSHOT"
+ ],
+ "file-systems": ["request.permission!=FILE_SYSTEM_DELETE", "request.permission!=FILE_SYSTEM_DELETE_SNAPSHOT"],
+ "mount-targets": ["request.permission!=MOUNT_TARGET_DELETE"],
+ "object-family": ["request.permission!=BUCKET_DELETE", "request.permission!=OBJECT_DELETE"],
+ "buckets": ["request.permission!=BUCKET_DELETE"],
+ "objects": ["request.permission!=OBJECT_DELETE"],
+ "volume-family": ["request.permission!=VOLUME_BACKUP_DELETE", "request.permission!=VOLUME_DELETE", "request.permission!=BOOT_VOLUME_BACKUP_DELETE"],
+ "volumes": ["request.permission!=VOLUME_DELETE"],
+ "volume-backups": ["request.permission!=VOLUME_BACKUP_DELETE"],
+ "boot-volume-backups": ["request.permission!=BOOT_VOLUME_BACKUP_DELETE"]},
+ "1.14-storage-admin": {
+ "all-resources": [
+ "request.permission=BUCKET_DELETE", "request.permission=OBJECT_DELETE", "request.permission=EXPORT_SET_DELETE",
+ "request.permission=MOUNT_TARGET_DELETE", "request.permission=FILE_SYSTEM_DELETE", "request.permission=VOLUME_BACKUP_DELETE",
+ "request.permission=VOLUME_DELETE", "request.permission=FILE_SYSTEM_DELETE_SNAPSHOT"
+ ],
+ "file-family": [
+ "request.permission=EXPORT_SET_DELETE", "request.permission=MOUNT_TARGET_DELETE",
+ "request.permission=FILE_SYSTEM_DELETE", "request.permission=FILE_SYSTEM_DELETE_SNAPSHOT"
+ ],
+ "file-systems": ["request.permission=FILE_SYSTEM_DELETE", "request.permission=FILE_SYSTEM_DELETE_SNAPSHOT"],
+ "mount-targets": ["request.permission=MOUNT_TARGET_DELETE"],
+ "object-family": ["request.permission=BUCKET_DELETE", "request.permission=OBJECT_DELETE"],
+ "buckets": ["request.permission=BUCKET_DELETE"],
+ "objects": ["request.permission=OBJECT_DELETE"],
+ "volume-family": ["request.permission=VOLUME_BACKUP_DELETE", "request.permission=VOLUME_DELETE", "request.permission=BOOT_VOLUME_BACKUP_DELETE"],
+ "volumes": ["request.permission=VOLUME_DELETE"],
+ "volume-backups": ["request.permission=VOLUME_BACKUP_DELETE"],
+ "boot-volume-backups": ["request.permission=BOOT_VOLUME_BACKUP_DELETE"]}}
+
+ # Tenancy Data
+ self.__tenancy = None
+ self.__cloud_guard_config = None
+ self.__cloud_guard_config_status = None
+ self.__os_namespace = None
+
+ # For IAM Checks
+ self.__tenancy_password_policy = None
+ self.__compartments = []
+ self.__raw_compartment = []
+ self.__policies = []
+ self.__users = []
+ self.__groups_to_users = []
+ self.__tag_defaults = []
+ self.__dynamic_groups = []
+ self.__identity_domains = []
+
+ # For Networking checks
+ self.__network_security_groups = []
+ self.__network_security_lists = []
+ self.__network_subnets = []
+ self.__network_fastconnects = {} # Indexed by DRG ID
+ self.__network_drgs = {} # Indexed by DRG ID
+ self.__raw_network_drgs = []
+
+ self.__network_cpes = []
+ self.__network_ipsec_connections = {} # Indexed by DRG ID
+ self.__network_drg_attachments = {} # Indexed by DRG ID
+
+ # For Autonomous Database Checks
+ self.__autonomous_databases = []
+
+ # For Oracle Analytics Cloud Checks
+ self.__analytics_instances = []
+
+ # For Oracle Integration Cloud Checks
+ self.__integration_instances = []
+
+ # For Logging & Monitoring checks
+ self.__event_rules = []
+ self.__logging_list = []
+ self.__subnet_logs = {}
+ self.__write_bucket_logs = {}
+ self.__read_bucket_logs = {}
+ self.__load_balancer_access_logs = []
+ self.__load_balancer_error_logs = []
+ self.__api_gateway_access_logs = []
+ self.__api_gateway_error_logs = []
+
+ # Cloud Guard checks
+ self.__cloud_guard_targets = {}
+
+ # For Storage Checks
+ self.__buckets = []
+ self.__boot_volumes = []
+ self.__block_volumes = []
+ self.__file_storage_system = []
+
+ # For Vaults and Keys checks
+ self.__vaults = []
+
+ # For Region
+ self.__regions = {}
+ self.__raw_regions = []
+ self.__home_region = None
+
+ # For ONS Subscriptions
+ self.__subscriptions = []
+
+ # Results from Advanced search query
+ self.__resources_in_root_compartment = []
+
+ # For Budgets
+ self.__budgets = []
+
+ # For Service Connector
+ self.__service_connectors = {}
+
+ # Error Data
+ self.__errors = []
+
+ # Setting list of regions to run in
+
+ # Start print time info
+ show_version(verbose=True)
+ print("\nStarts at " + self.start_time_str)
+ self.__config = config
+ self.__signer = signer
+
+ # By default 'True' is passed so all output is printed
+ self.__print_to_screen = print_to_screen.upper() == 'TRUE'
+
+ # By default debugging is disabled
+ global DEBUG
+ DEBUG = debug
+
+ # creating list of regions to run
+ try:
+ if regions_to_run_in:
+ self.__regions_to_run_in = regions_to_run_in.split(",")
+ self.__run_in_all_regions = False
+ else:
+ # If no regions are passed, run in all regions
+ self.__regions_to_run_in = regions_to_run_in
+ self.__run_in_all_regions = True
+ print("\nRegions to run in: " + ("all regions" if self.__run_in_all_regions else str(self.__regions_to_run_in)))
+
+ except Exception:
+ raise RuntimeError("Invalid input: regions must be comma separated with no spaces, e.g. 'us-ashburn-1,us-phoenix-1'")
+
+ try:
+
+ self.__identity = oci.identity.IdentityClient(
+ self.__config, signer=self.__signer)
+ if proxy:
+ self.__identity.base_client.session.proxies = {'https': proxy}
+
+ # Getting Tenancy Data and Region data
+ self.__tenancy = self.__identity.get_tenancy(
+ config["tenancy"]).data
+ regions = self.__identity.list_region_subscriptions(
+ self.__tenancy.id).data
+
+ except Exception as e:
+ raise RuntimeError("Failed to get identity information." + str(e.args))
+
+ try:
+ #Find the budget home region to ensure the budget client is run against the home region
+ budget_home_region = next(
+ (obj.region_name for obj in regions if obj.is_home_region),None)
+ budget_config = self.__config.copy()
+ budget_config["region"] = budget_home_region
+
+ self.__budget_client = oci.budget.BudgetClient(
+ budget_config, signer=self.__signer)
+ if proxy:
+ self.__budget_client.base_client.session.proxies = {'https': proxy}
+ except Exception as e:
+ raise RuntimeError("Failed to create budgets client: " + str(e.args))
+
+ # Creating a record for home region and a list of all regions including the home region
+ for region in regions:
+ if region.is_home_region:
+ self.__home_region = region.region_name
+ print("Home region for tenancy is " + self.__home_region)
+ if self.__home_region != self.__config['region']:
+ print_header("It is recommended to run the CIS Compliance script in your home region")
+ print_header("The current region is: " + self.__config['region'])
+
+ self.__regions[region.region_name] = {
+ "is_home_region": region.is_home_region,
+ "region_key": region.region_key,
+ "region_name": region.region_name,
+ "status": region.status,
+ "identity_client": self.__identity,
+ "budget_client": self.__budget_client
+ }
+ elif region.region_name in self.__regions_to_run_in or self.__run_in_all_regions:
+ self.__regions[region.region_name] = {
+ "is_home_region": region.is_home_region,
+ "region_key": region.region_key,
+ "region_name": region.region_name,
+ "status": region.status,
+ }
+
+ record = {
+ "is_home_region": region.is_home_region,
+ "region_key": region.region_key,
+ "region_name": region.region_name,
+ "status": region.status,
+ }
+ self.__raw_regions.append(record)
+
+ # By default the report directory is the tenancy name plus today's date
+ if report_directory:
+ self.__report_directory = report_directory + "/"
+ else:
+ self.__report_directory = self.__tenancy.name + "-" + self.report_datetime
+
+ # Creating signers and config for all regions
+ self.__create_regional_signers(proxy)
+
+ # Setting os_namespace based on home region
+ try:
+ if not (self.__os_namespace):
+ self.__os_namespace = self.__regions[self.__home_region]['os_client'].get_namespace().data
+ except Exception as e:
+ raise RuntimeError(
+ "Failed to get tenancy namespace." + str(e.args))
+
+ # Determining if an object storage client is needed for output
+ self.__output_bucket = output_bucket
+ if self.__output_bucket:
+ self.__output_bucket_client = self.__regions[self.__home_region]['os_client']
+
+ # Determining if all raw data will be output
+ self.__output_raw_data = raw_data
+
+ # Determining if OCI Best Practices will be checked and output
+ self.__obp_checks = obp
+
+ # Determining if CSV report OCIDs will be redacted
+ self.__redact_output = redact_output
+
+ ##########################################################################
+ # Create regional configs and signers and append them to the self.__regions object
+ ##########################################################################
+ def __create_regional_signers(self, proxy):
+ print("Creating regional signers and configs...")
+ for region_key, region_values in self.__regions.items():
+ # Creating regional configs and signers
+ region_signer = self.__signer
+ region_signer.region_name = region_key
+ region_config = self.__config
+ region_config['region'] = region_key
+
+ try:
+ identity = oci.identity.IdentityClient(region_config, signer=region_signer)
+ if proxy:
+ identity.base_client.session.proxies = {'https': proxy}
+ region_values['identity_client'] = identity
+
+ audit = oci.audit.AuditClient(region_config, signer=region_signer)
+ if proxy:
+ audit.base_client.session.proxies = {'https': proxy}
+ region_values['audit_client'] = audit
+
+ cloud_guard = oci.cloud_guard.CloudGuardClient(region_config, signer=region_signer)
+ if proxy:
+ cloud_guard.base_client.session.proxies = {'https': proxy}
+ region_values['cloud_guard_client'] = cloud_guard
+
+ search = oci.resource_search.ResourceSearchClient(region_config, signer=region_signer)
+ if proxy:
+ search.base_client.session.proxies = {'https': proxy}
+ region_values['search_client'] = search
+
+ network = oci.core.VirtualNetworkClient(region_config, signer=region_signer)
+ if proxy:
+ network.base_client.session.proxies = {'https': proxy}
+ region_values['network_client'] = network
+
+ events = oci.events.EventsClient(region_config, signer=region_signer)
+ if proxy:
+ events.base_client.session.proxies = {'https': proxy}
+ region_values['events_client'] = events
+
+ logging = oci.logging.LoggingManagementClient(region_config, signer=region_signer)
+ if proxy:
+ logging.base_client.session.proxies = {'https': proxy}
+ region_values['logging_client'] = logging
+
+ os_client = oci.object_storage.ObjectStorageClient(region_config, signer=region_signer)
+ if proxy:
+ os_client.base_client.session.proxies = {'https': proxy}
+ region_values['os_client'] = os_client
+
+ vault = oci.key_management.KmsVaultClient(region_config, signer=region_signer)
+ if proxy:
+ vault.base_client.session.proxies = {'https': proxy}
+ region_values['vault_client'] = vault
+
+ ons_subs = oci.ons.NotificationDataPlaneClient(region_config, signer=region_signer)
+ if proxy:
+ ons_subs.base_client.session.proxies = {'https': proxy}
+ region_values['ons_subs_client'] = ons_subs
+
+ adb = oci.database.DatabaseClient(region_config, signer=region_signer)
+ if proxy:
+ adb.base_client.session.proxies = {'https': proxy}
+ region_values['adb_client'] = adb
+
+ oac = oci.analytics.AnalyticsClient(region_config, signer=region_signer)
+ if proxy:
+ oac.base_client.session.proxies = {'https': proxy}
+ region_values['oac_client'] = oac
+
+ oic = oci.integration.IntegrationInstanceClient(region_config, signer=region_signer)
+ if proxy:
+ oic.base_client.session.proxies = {'https': proxy}
+ region_values['oic_client'] = oic
+
+ bv = oci.core.BlockstorageClient(region_config, signer=region_signer)
+ if proxy:
+ bv.base_client.session.proxies = {'https': proxy}
+ region_values['bv_client'] = bv
+
+ fss = oci.file_storage.FileStorageClient(region_config, signer=region_signer)
+ if proxy:
+ fss.base_client.session.proxies = {'https': proxy}
+ region_values['fss_client'] = fss
+
+ sch = oci.sch.ServiceConnectorClient(region_config, signer=region_signer)
+ if proxy:
+ sch.base_client.session.proxies = {'https': proxy}
+ region_values['sch_client'] = sch
+
+ except Exception as e:
+ raise RuntimeError("Failed to create regional clients for data collection: " + str(e))
+
+ ##########################################################################
+ # Check for Managed PaaS Compartment
+ ##########################################################################
+ def __if_not_managed_paas_compartment(self, name):
+ return name != "ManagedCompartmentForPaaS"
+
+ ##########################################################################
+ # Set ManagementCompartment ID
+ ##########################################################################
+ def __set_managed_paas_compartment(self):
+ self.__managed_paas_compartment_id = ""
+ for compartment in self.__compartments:
+ if compartment.name == "ManagedCompartmentForPaaS":
+ self.__managed_paas_compartment_id = compartment.id
+
+ ##########################################################################
+ # Load compartments
+ ##########################################################################
+ def __identity_read_compartments(self):
+ print("\nProcessing Compartments...")
+ try:
+ self.__compartments = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_compartments,
+ compartment_id=self.__tenancy.id,
+ compartment_id_in_subtree=True,
+ lifecycle_state="ACTIVE"
+ ).data
+
+ # Need to convert for raw output
+ for compartment in self.__compartments:
+ deep_link = self.__oci_compartment_uri + compartment.id
+ record = {
+ 'id': compartment.id,
+ 'name': compartment.name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, compartment.name),
+ 'compartment_id': compartment.compartment_id,
+ 'defined_tags': compartment.defined_tags,
+ "description": compartment.description,
+ "freeform_tags": compartment.freeform_tags,
+ "inactive_status": compartment.inactive_status,
+ "is_accessible": compartment.is_accessible,
+ "lifecycle_state": compartment.lifecycle_state,
+ "time_created": compartment.time_created.strftime(self.__iso_time_format),
+ "region": ""
+ }
+ self.__raw_compartment.append(record)
+ self.cis_foundations_benchmark_1_2['5.1']['Total'].append(compartment)
+
+ # Add root compartment which is not part of the list_compartments
+ self.__compartments.append(self.__tenancy)
+ deep_link = self.__oci_compartment_uri + self.__tenancy.id
+ root_compartment = {
+ "id": self.__tenancy.id,
+ "name": self.__tenancy.name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, self.__tenancy.name),
+ "compartment_id": "(root)",
+ "defined_tags": self.__tenancy.defined_tags,
+ "description": self.__tenancy.description,
+ "freeform_tags": self.__tenancy.freeform_tags,
+ "inactive_status": "",
+ "is_accessible": "",
+ "lifecycle_state": "",
+ "time_created": "",
+ "region": ""
+
+ }
+ self.__raw_compartment.append(root_compartment)
+
+ self.__set_managed_paas_compartment()
+
+ print("\tProcessed " + str(len(self.__compartments)) + " Compartments")
+ return self.__compartments
+
+ except Exception as e:
+ raise RuntimeError(
+ "Error in identity_read_compartments: " + str(e.args))
+
+ ##########################################################################
+ # Load Identity Domains
+ ##########################################################################
+ def __identity_read_domains(self):
+ print("Processing Identity Domains...")
+ raw_identity_domains = []
+ # Finding all Identity Domains in the tenancy
+ for compartment in self.__compartments:
+ try:
+ debug("__identity_read_domains: Getting Identity Domains for Compartment :" + str(compartment.name))
+
+ raw_identity_domains += oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_domains,
+ compartment_id = compartment.id,
+ lifecycle_state = "ACTIVE"
+ ).data
+ # If this succeeds, the tenancy likely has Identity Domains
+ self.__identity_domains_enabled = True
+
+ except Exception as e:
+ debug("__identity_read_domains: Exception collecting Identity Domains \n" + str(e))
+ # If this fails, the tenancy likely doesn't have Identity Domains or the caller lacks permissions
+ break
+
+ # If the tenancy has no Identity Domains, record that and return early
+ if not raw_identity_domains:
+ self.__identity_domains_enabled = False
+ return self.__identity_domains_enabled
+
+ for domain in raw_identity_domains:
+ debug("__identity_read_domains: Getting password policy for domain: " + domain.display_name)
+ domain_dict = oci.util.to_dict(domain)
+ try:
+ debug("__identity_read_domains: Getting Identity Domain Password Policy")
+ idcs_url = domain.url + "/admin/v1/PasswordPolicies/PasswordPolicy"
+ raw_pwd_policy_resp = requests.get(url=idcs_url, auth=self.__signer)
+ raw_pwd_policy_dict = json.loads(raw_pwd_policy_resp.content)
+
+ pwd_policy_dict = oci.util.to_dict(oci.identity_domains.IdentityDomainsClient(\
+ config=self.__config, service_endpoint=domain.url).get_password_policy(\
+ password_policy_id=raw_pwd_policy_dict['ocid']).data)
+
+ domain_dict['password_policy'] = pwd_policy_dict
+ domain_dict['errors'] = None
+ except Exception as e:
+ debug("Identity Domains Error is " + str(e))
+ domain_dict['password_policy'] = None
+ domain_dict['errors'] = str(e)
+
+ self.__identity_domains.append(domain_dict)
+
+ else:
+ self.__identity_domains_enabled = True
+ print("\tProcessed " + str(len(self.__identity_domains)) + " Identity Domains")
+ return self.__identity_domains_enabled
+
+ ##########################################################################
+ # Load Groups and Group membership
+ ##########################################################################
+ def __identity_read_groups_and_membership(self):
+ try:
+ # Getting all Groups in the Tenancy
+ debug("processing __identity_read_groups_and_membership ")
+ groups_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_groups,
+ compartment_id=self.__tenancy.id
+ ).data
+ # For each group in the tenancy, get the group's membership
+ for grp in groups_data:
+ debug("__identity_read_groups_and_membership: reading group data " + str(grp.name))
+ membership = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_user_group_memberships,
+ compartment_id=self.__tenancy.id,
+ group_id=grp.id).data
+ # For empty groups just print one record with the group info
+ grp_deep_link = self.__oci_groups_uri + grp.id
+ if not membership:
+ group_record = {
+ "id": grp.id,
+ "name": grp.name,
+ "deep_link": self.__generate_csv_hyperlink(grp_deep_link, grp.name),
+ "description": grp.description,
+ "lifecycle_state": grp.lifecycle_state,
+ "time_created": grp.time_created.strftime(self.__iso_time_format),
+ "user_id": "",
+ "user_id_link": ""
+ }
+ # Adding a record per empty group
+ self.__groups_to_users.append(group_record)
+ # For groups with members print one record per user per group
+ for member in membership:
+ debug("__identity_read_groups_and_membership: reading members data in group" + str(grp.name))
+ user_deep_link = self.__oci_users_uri + member.user_id
+ group_record = {
+ "id": grp.id,
+ "name": grp.name,
+ "deep_link": self.__generate_csv_hyperlink(grp_deep_link, grp.name),
+ "description": grp.description,
+ "lifecycle_state": grp.lifecycle_state,
+ "time_created": grp.time_created.strftime(self.__iso_time_format),
+ "user_id": member.user_id,
+ "user_id_link": self.__generate_csv_hyperlink(user_deep_link, member.user_id)
+ }
+ # Adding a record per user to group
+ self.__groups_to_users.append(group_record)
+ return self.__groups_to_users
+ except Exception as e:
+ self.__errors.append({"id" : "__identity_read_groups_and_membership", "error" : str(e)})
+ debug("__identity_read_groups_and_membership: error reading" + str(e))
+ raise RuntimeError(
+ "Error in __identity_read_groups_and_membership: " + str(e.args))
+
+ ##########################################################################
+ # Load users
+ ##########################################################################
+ def __identity_read_users(self):
+ try:
+ # Getting all users in the Tenancy
+ users_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_users,
+ compartment_id=self.__tenancy.id
+ ).data
+
+ # Adding record to the users
+ for user in users_data:
+ deep_link = self.__oci_users_uri + user.id
+ record = {
+ 'id': user.id,
+ 'name': user.name,
+ 'deep_link': self.__generate_csv_hyperlink(deep_link, user.name),
+ 'defined_tags': user.defined_tags,
+ 'description': user.description,
+ 'email': user.email,
+ 'email_verified': user.email_verified,
+ 'external_identifier': user.external_identifier,
+ 'identity_provider_id': user.identity_provider_id,
+ 'is_mfa_activated': user.is_mfa_activated,
+ 'lifecycle_state': user.lifecycle_state,
+ 'time_created': user.time_created.strftime(self.__iso_time_format),
+ 'can_use_api_keys': user.capabilities.can_use_api_keys,
+ 'can_use_auth_tokens': user.capabilities.can_use_auth_tokens,
+ 'can_use_console_password': user.capabilities.can_use_console_password,
+ 'can_use_customer_secret_keys': user.capabilities.can_use_customer_secret_keys,
+ 'can_use_db_credentials': user.capabilities.can_use_db_credentials,
+ 'can_use_o_auth2_client_credentials': user.capabilities.can_use_o_auth2_client_credentials,
+ 'can_use_smtp_credentials': user.capabilities.can_use_smtp_credentials,
+ 'groups': []
+ }
+ # Adding Groups to the user
+ for group in self.__groups_to_users:
+ if user.id == group['user_id']:
+ record['groups'].append(group['name'])
+
+ record['api_keys'] = self.__identity_read_user_api_key(user.id)
+ record['auth_tokens'] = self.__identity_read_user_auth_token(
+ user.id)
+ record['customer_secret_keys'] = self.__identity_read_user_customer_secret_key(
+ user.id)
+
+ self.__users.append(record)
+ print("\tProcessed " + str(len(self.__users)) + " Users")
+ return self.__users
+
+ except Exception as e:
+ debug("__identity_read_users: User ID is: " + str(user))
+ raise RuntimeError(
+ "Error in __identity_read_users: " + str(e.args))
+
+ ##########################################################################
+ # Load user api keys
+ ##########################################################################
+ def __identity_read_user_api_key(self, user_ocid):
+ api_keys = []
+ try:
+ user_api_keys_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_api_keys,
+ user_id=user_ocid
+ ).data
+
+ for api_key in user_api_keys_data:
+ deep_link = self.__oci_users_uri + user_ocid + "/api-keys"
+ record = {
+ 'id': api_key.key_id,
+ 'fingerprint': api_key.fingerprint,
+ 'deep_link': self.__generate_csv_hyperlink(deep_link, api_key.fingerprint),
+ 'inactive_status': api_key.inactive_status,
+ 'lifecycle_state': api_key.lifecycle_state,
+ 'time_created': api_key.time_created.strftime(self.__iso_time_format),
+ }
+ api_keys.append(record)
+
+ return api_keys
+
+ except Exception as e:
+ self.__errors.append({"id" : user_ocid, "error" : "Failed to get API keys for User ID"})
+ debug("__identity_read_user_api_key: Failed to get API keys for User ID: " + user_ocid + " - " + str(e))
+ return api_keys
+
+ ##########################################################################
+ # Load user auth tokens
+ ##########################################################################
+ def __identity_read_user_auth_token(self, user_ocid):
+ auth_tokens = []
+ try:
+ auth_tokens_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_auth_tokens,
+ user_id=user_ocid
+ ).data
+
+ for token in auth_tokens_data:
+ deep_link = self.__oci_users_uri + user_ocid + "/swift-credentials"
+ record = {
+ 'id': token.id,
+ 'description': token.description,
+ 'deep_link': self.__generate_csv_hyperlink(deep_link, token.description),
+ 'inactive_status': token.inactive_status,
+ 'lifecycle_state': token.lifecycle_state,
+ # .strftime('%Y-%m-%d %H:%M:%S'),
+ 'time_created': token.time_created.strftime(self.__iso_time_format),
+ 'time_expires': str(token.time_expires),
+ 'token': token.token
+
+ }
+ auth_tokens.append(record)
+
+ return auth_tokens
+
+ except Exception as e:
+ self.__errors.append({"id" : user_ocid, "error" : "Failed to get auth tokens for User ID"})
+ debug("__identity_read_user_auth_token: Failed to get auth tokens for User ID: " + user_ocid + " - " + str(e))
+ return auth_tokens
+
+ ##########################################################################
+ # Load user customer secret key
+ ##########################################################################
+ def __identity_read_user_customer_secret_key(self, user_ocid):
+ customer_secret_key = []
+ try:
+ customer_secret_key_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_customer_secret_keys,
+ user_id=user_ocid
+ ).data
+
+ for key in customer_secret_key_data:
+ deep_link = self.__oci_users_uri + user_ocid + "/secret-keys"
+ record = {
+ 'id': key.id,
+ 'display_name': key.display_name,
+ 'deep_link': self.__generate_csv_hyperlink(deep_link, key.display_name),
+ 'inactive_status': key.inactive_status,
+ 'lifecycle_state': key.lifecycle_state,
+ 'time_created': key.time_created.strftime(self.__iso_time_format),
+ 'time_expires': str(key.time_expires),
+
+ }
+ customer_secret_key.append(record)
+
+ return customer_secret_key
+
+ except Exception as e:
+ self.__errors.append({"id" : user_ocid, "error" : "Failed to get customer secret keys for User ID"})
+ debug("__identity_read_user_customer_secret_key: Failed to get customer secret keys for User ID: " + user_ocid + " - " + str(e))
+ return customer_secret_key
+
+ ##########################################################################
+ # Tenancy IAM Policies
+ ##########################################################################
+ def __identity_read_tenancy_policies(self):
+ try:
+ policies_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query Policy resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ for policy in policies_data:
+ deep_link = self.__oci_policies_uri + policy.identifier
+ record = {
+ "id": policy.identifier,
+ "name": policy.display_name,
+ 'deep_link': self.__generate_csv_hyperlink(deep_link, policy.display_name),
+ "compartment_id": policy.compartment_id,
+ "description": policy.additional_details['description'],
+ "lifecycle_state": policy.lifecycle_state,
+ "statements": policy.additional_details['statements']
+ }
+ self.__policies.append(record)
+ print("\tProcessed " + str(len(self.__policies)) + " IAM Policies")
+ return self.__policies
+
+ except Exception as e:
+ raise RuntimeError("Error in __identity_read_tenancy_policies: " + str(e.args))
+
+ ############################################
+ # Load Identity Dynamic Groups
+ ############################################
+ def __identity_read_dynamic_groups(self):
+ try:
+ dynamic_groups_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_dynamic_groups,
+ compartment_id=self.__tenancy.id).data
+ for dynamic_group in dynamic_groups_data:
+ deep_link = self.__oci_dynamic_groups_uri + dynamic_group.id
+ record = {
+ "id": dynamic_group.id,
+ "name": dynamic_group.name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, dynamic_group.name),
+ "description": dynamic_group.description,
+ "matching_rule": dynamic_group.matching_rule,
+ "time_created": dynamic_group.time_created.strftime(self.__iso_time_format),
+ "inactive_status": dynamic_group.inactive_status,
+ "lifecycle_state": dynamic_group.lifecycle_state,
+ "defined_tags": dynamic_group.defined_tags,
+ "freeform_tags": dynamic_group.freeform_tags,
+ "compartment_id": dynamic_group.compartment_id,
+ "notes": ""
+ }
+ self.__dynamic_groups.append(record)
+
+ print("\tProcessed " + str(len(self.__dynamic_groups)) + " Dynamic Groups")
+ return self.__dynamic_groups
+ except Exception as e:
+ raise RuntimeError("Error in __identity_read_dynamic_groups: " + str(e.args))
+
+ ############################################
+ # Load Availability Domains
+ ############################################
+ def __identity_read_availability_domains(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ region_values['availability_domains'] = oci.pagination.list_call_get_all_results(
+ region_values['identity_client'].list_availability_domains,
+ compartment_id=self.__tenancy.id
+ ).data
+ print("\tProcessed " + str(len(region_values['availability_domains'])) + " Availability Domains in " + region_key)
+
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __identity_read_availability_domains: " + str(e.args))
+
+ ##########################################################################
+ # Get Objects Store Buckets
+ ##########################################################################
+ def __os_read_buckets(self):
+
+ # Getting OS Namespace
+ try:
+ # looping through regions
+ for region_key, region_values in self.__regions.items():
+ buckets_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query Bucket resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+ # Getting Bucket Info
+ for bucket in buckets_data:
+ try:
+ bucket_info = region_values['os_client'].get_bucket(
+ bucket.additional_details['namespace'], bucket.display_name).data
+ deep_link = self.__oci_buckets_uri + bucket_info.namespace + "/" + bucket_info.name + "/objects?region=" + region_key
+ record = {
+ "id": bucket_info.id,
+ "name": bucket_info.name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, bucket_info.name),
+ "kms_key_id": bucket_info.kms_key_id,
+ "namespace": bucket_info.namespace,
+ "compartment_id": bucket_info.compartment_id,
+ "object_events_enabled": bucket_info.object_events_enabled,
+ "public_access_type": bucket_info.public_access_type,
+ "replication_enabled": bucket_info.replication_enabled,
+ "is_read_only": bucket_info.is_read_only,
+ "storage_tier": bucket_info.storage_tier,
+ "time_created": bucket_info.time_created.strftime(self.__iso_time_format),
+ "versioning": bucket_info.versioning,
+ "defined_tags": bucket_info.defined_tags,
+ "freeform_tags": bucket_info.freeform_tags,
+ "region": region_key,
+ "notes": ""
+ }
+ self.__buckets.append(record)
+ except Exception as e:
+ record = {
+ "id": "",
+ "name": bucket.display_name,
+ "deep_link": "",
+ "kms_key_id": "",
+ "namespace": bucket.additional_details['namespace'],
+ "compartment_id": bucket.compartment_id,
+ "object_events_enabled": "",
+ "public_access_type": "",
+ "replication_enabled": "",
+ "is_read_only": "",
+ "storage_tier": "",
+ "time_created": bucket.time_created.strftime(self.__iso_time_format),
+ "versioning": "",
+ "defined_tags": bucket.defined_tags,
+ "freeform_tags": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__buckets.append(record)
+ # Returning Buckets
+ print("\tProcessed " + str(len(self.__buckets)) + " Buckets")
+ return self.__buckets
+ except Exception as e:
+ raise RuntimeError("Error in __os_read_buckets " + str(e.args))
+
+ ############################################
+ # Load Block Volumes
+ ############################################
+ def __block_volume_read_block_volumes(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ volumes_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query Volume resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ # Getting Block Volume info
+ for volume in volumes_data:
+ deep_link = self.__oci_block_volumes_uri + volume.identifier + '?region=' + region_key
+ try:
+ record = {
+ "id": volume.identifier,
+ "display_name": volume.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, volume.display_name),
+ "kms_key_id": volume.additional_details['kmsKeyId'],
+ "lifecycle_state": volume.lifecycle_state,
+ "compartment_id": volume.compartment_id,
+ "size_in_gbs": volume.additional_details['sizeInGBs'],
+ "size_in_mbs": volume.additional_details['sizeInMBs'],
+ # "source_details": volume.source_details,
+ "time_created": volume.time_created.strftime(self.__iso_time_format),
+ # "volume_group_id": volume.volume_group_id,
+ # "vpus_per_gb": volume.vpus_per_gb,
+ # "auto_tuned_vpus_per_gb": volume.auto_tuned_vpus_per_gb,
+ "availability_domain": volume.availability_domain,
+ # "block_volume_replicas": volume.block_volume_replicas,
+ # "is_auto_tune_enabled": volume.is_auto_tune_enabled,
+ # "is_hydrated": volume.is_hydrated,
+ "defined_tags": volume.defined_tags,
+ "freeform_tags": volume.freeform_tags,
+ "system_tags": volume.system_tags,
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": volume.identifier,
+ "display_name": volume.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, volume.display_name),
+ "kms_key_id": "",
+ "lifecycle_state": "",
+ "compartment_id": "",
+ "size_in_gbs": "",
+ "size_in_mbs": "",
+ # "source_details": "",
+ "time_created": "",
+ # "volume_group_id": "",
+ # "vpus_per_gb": "",
+ # "auto_tuned_vpus_per_gb": "",
+ "availability_domain": "",
+ # "block_volume_replicas": "",
+ # "is_auto_tune_enabled": "",
+ # "is_hydrated": "",
+ "defined_tags": "",
+ "freeform_tags": "",
+ "system_tags": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__block_volumes.append(record)
+ print("\tProcessed " + str(len(self.__block_volumes)) + " Block Volumes")
+ return self.__block_volumes
+ except Exception as e:
+ raise RuntimeError("Error in __block_volume_read_block_volumes " + str(e.args))
+
+ ############################################
+ # Load Boot Volumes
+ ############################################
+ def __boot_volume_read_boot_volumes(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ boot_volumes_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query BootVolume resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ for boot_volume in boot_volumes_data:
+ deep_link = self.__oci_boot_volumes_uri + boot_volume.identifier + '?region=' + region_key
+ try:
+ record = {
+ "id": boot_volume.identifier,
+ "display_name": boot_volume.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, boot_volume.display_name),
+ # "image_id": boot_volume.image_id,
+ "kms_key_id": boot_volume.additional_details['kmsKeyId'],
+ "lifecycle_state": boot_volume.lifecycle_state,
+ "size_in_gbs": boot_volume.additional_details['sizeInGBs'],
+ "size_in_mbs": boot_volume.additional_details['sizeInMBs'],
+ "availability_domain": boot_volume.availability_domain,
+ "time_created": boot_volume.time_created.strftime(self.__iso_time_format),
+ "compartment_id": boot_volume.compartment_id,
+ # "auto_tuned_vpus_per_gb": boot_volume.auto_tuned_vpus_per_gb,
+ # "boot_volume_replicas": boot_volume.boot_volume_replicas,
+ # "is_auto_tune_enabled": boot_volume.is_auto_tune_enabled,
+ # "is_hydrated": boot_volume.is_hydrated,
+ # "source_details": boot_volume.source_details,
+ # "vpus_per_gb": boot_volume.vpus_per_gb,
+ "system_tags": boot_volume.system_tags,
+ "defined_tags": boot_volume.defined_tags,
+ "freeform_tags": boot_volume.freeform_tags,
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": boot_volume.identifier,
+ "display_name": boot_volume.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, boot_volume.display_name),
+ # "image_id": "",
+ "kms_key_id": "",
+ "lifecycle_state": "",
+ "size_in_gbs": "",
+ "size_in_mbs": "",
+ "availability_domain": "",
+ "time_created": "",
+ "compartment_id": "",
+ # "auto_tuned_vpus_per_gb": "",
+ # "boot_volume_replicas": "",
+ # "is_auto_tune_enabled": "",
+ # "is_hydrated": "",
+ # "source_details": "",
+ # "vpus_per_gb": "",
+ "system_tags": "",
+ "defined_tags": "",
+ "freeform_tags": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__boot_volumes.append(record)
+ print("\tProcessed " + str(len(self.__boot_volumes)) + " Boot Volumes")
+ return (self.__boot_volumes)
+ except Exception as e:
+ raise RuntimeError("Error in __boot_volume_read_boot_volumes " + str(e.args))
+
+ ############################################
+ # Load FSS
+ ############################################
+ def __fss_read_fsss(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ fss_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query FileSystem resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ for fss in fss_data:
+ deep_link = self.__oci_fss_uri + fss.identifier + '?region=' + region_key
+ try:
+ record = {
+ "id": fss.identifier,
+ "display_name": fss.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, fss.display_name),
+ "kms_key_id": fss.additional_details['kmsKeyId'],
+ "lifecycle_state": fss.lifecycle_state,
+ # "lifecycle_details": fss.lifecycle_details,
+ "availability_domain": fss.availability_domain,
+ "time_created": fss.time_created.strftime(self.__iso_time_format),
+ "compartment_id": fss.compartment_id,
+ # "is_clone_parent": fss.is_clone_parent,
+ # "is_hydrated": fss.is_hydrated,
+ # "metered_bytes": fss.metered_bytes,
+ "source_details": fss.additional_details['sourceDetails'],
+ "defined_tags": fss.defined_tags,
+ "freeform_tags": fss.freeform_tags,
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": fss.identifier,
+ "display_name": fss.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, fss.display_name),
+ "kms_key_id": "",
+ "lifecycle_state": "",
+ # "lifecycle_details": "",
+ "availability_domain": "",
+ "time_created": "",
+ "compartment_id": "",
+ # "is_clone_parent": "",
+ # "is_hydrated": "",
+ # "metered_bytes": "",
+ "source_details": "",
+ "defined_tags": "",
+ "freeform_tags": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__file_storage_system.append(record)
+ print("\tProcessed " + str(len(self.__file_storage_system)) + " File Storage Systems")
+ return (self.__file_storage_system)
+ except Exception as e:
+ raise RuntimeError("Error in __fss_read_fsss " + str(e.args))
+
+ ##########################################################################
+ # Network Security Groups
+ ##########################################################################
+ def __network_read_network_security_groups_rules(self):
+ self.__network_security_groups = []
+ # Looping Through Compartments Except Managed
+ try:
+ for region_key, region_values in self.__regions.items():
+ nsgs_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query NetworkSecurityGroup resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ # Looping through NSGs to get their rules
+ for nsg in nsgs_data:
+ deep_link = self.__oci_networking_uri + nsg.additional_details['vcnId'] + "/network-security-groups/" + nsg.identifier + '?region=' + region_key
+ record = {
+ "id": nsg.identifier,
+ "compartment_id": nsg.compartment_id,
+ "display_name": nsg.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, nsg.display_name),
+ "lifecycle_state": nsg.lifecycle_state,
+ "time_created": nsg.time_created.strftime(self.__iso_time_format),
+ "vcn_id": nsg.additional_details['vcnId'],
+ "freeform_tags": nsg.freeform_tags,
+ "defined_tags": nsg.defined_tags,
+ "region": region_key,
+ "rules": []
+ }
+
+ nsg_rules = oci.pagination.list_call_get_all_results(
+ region_values['network_client'].list_network_security_group_security_rules,
+ network_security_group_id=nsg.identifier
+ ).data
+
+ for rule in nsg_rules:
+ deep_link = self.__oci_networking_uri + nsg.additional_details['vcnId'] + "/network-security-groups/" + nsg.identifier + "/nsg-rules" + '?region=' + region_key
+ rule_record = {
+ "id": rule.id,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, rule.id),
+ "destination": rule.destination,
+ "destination_type": rule.destination_type,
+ "direction": rule.direction,
+ "icmp_options": rule.icmp_options,
+ "is_stateless": rule.is_stateless,
+ "is_valid": rule.is_valid,
+ "protocol": rule.protocol,
+ "source": rule.source,
+ "source_type": rule.source_type,
+ "tcp_options": rule.tcp_options,
+ "time_created": rule.time_created.strftime(self.__iso_time_format),
+ "udp_options": rule.udp_options,
+
+ }
+ # Append NSG Rules to NSG
+ record['rules'].append(rule_record)
+ # Append NSG to list of NSGs
+ self.__network_security_groups.append(record)
+ print("\tProcessed " + str(len(self.__network_security_groups)) + " Network Security Groups")
+ return self.__network_security_groups
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __network_read_network_security_groups_rules " + str(e.args))
+
+ ##########################################################################
+ # Network Security Lists
+ ##########################################################################
+ def __network_read_network_security_lists(self):
+ # Looping Through Compartments Except Managed
+ try:
+ for region_key, region_values in self.__regions.items():
+ security_lists_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query SecurityList resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ # Looping through Security Lists to get their rules
+ for security_list in security_lists_data:
+ deep_link = self.__oci_networking_uri + security_list.additional_details['vcnId'] + \
+ "/security-lists/" + security_list.identifier + '?region=' + region_key
+ record = {
+ "id": security_list.identifier,
+ "compartment_id": security_list.compartment_id,
+ "display_name": security_list.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, security_list.display_name),
+ "lifecycle_state": security_list.lifecycle_state,
+ "time_created": security_list.time_created.strftime(self.__iso_time_format),
+ "vcn_id": security_list.additional_details['vcnId'],
+ "region": region_key,
+ "freeform_tags": security_list.freeform_tags,
+ "defined_tags": security_list.defined_tags,
+ "ingress_security_rules": [],
+ "egress_security_rules": []
+ }
+
+ if security_list.additional_details['egressSecurityRules'] is not None:
+ for i in range(len(security_list.additional_details['egressSecurityRules'])):
+ erule = {
+ # "description": egress_rule.description,
+ "destination": security_list.additional_details['egressSecurityRules'][i]['destination'],
+ # "destination_type": egress_rule.destination_type,
+ "icmp_options": security_list.additional_details['egressSecurityRules'][i]['icmpOptions'],
+ "is_stateless": security_list.additional_details['egressSecurityRules'][i]['isStateless'],
+ "protocol": security_list.additional_details['egressSecurityRules'][i]['protocol'],
+ "tcp_options": security_list.additional_details['egressSecurityRules'][i]['tcpOptions'],
+ "udp_options": security_list.additional_details['egressSecurityRules'][i]['udpOptions']
+ }
+ record['egress_security_rules'].append(erule)
+ if security_list.additional_details['ingressSecurityRules'] is not None:
+ for i in range(len(security_list.additional_details['ingressSecurityRules'])):
+ irule = {
+ # "description": ingress_rule.description,
+ "source": security_list.additional_details['ingressSecurityRules'][i]['source'],
+ # "source_type": ingress_rule.source_type,
+ "icmp_options": security_list.additional_details['ingressSecurityRules'][i]['icmpOptions'],
+ "is_stateless": security_list.additional_details['ingressSecurityRules'][i]['isStateless'],
+ "protocol": security_list.additional_details['ingressSecurityRules'][i]['protocol'],
+ "tcp_options": security_list.additional_details['ingressSecurityRules'][i]['tcpOptions'],
+ "udp_options": security_list.additional_details['ingressSecurityRules'][i]['udpOptions']
+ }
+ record['ingress_security_rules'].append(irule)
+
+ # Append Security List to list of NSGs
+ self.__network_security_lists.append(record)
+
+ print("\tProcessed " + str(len(self.__network_security_lists)) + " Security Lists")
+ return self.__network_security_lists
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __network_read_network_security_lists " + str(e.args))
+
+ ##########################################################################
+ # Network Subnets Lists
+ ##########################################################################
+ def __network_read_network_subnets(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ subnets_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query Subnet resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ try:
+ for subnet in subnets_data:
+ deep_link = self.__oci_networking_uri + subnet.additional_details['vcnId'] + "/subnets/" + subnet.identifier + '?region=' + region_key
+ record = {
+ "id": subnet.identifier,
+ "availability_domain": subnet.availability_domain,
+ "cidr_block": subnet.additional_details['cidrBlock'],
+ "compartment_id": subnet.compartment_id,
+ "dhcp_options_id": subnet.additional_details['dhcpOptionsId'],
+ "display_name": subnet.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, subnet.display_name),
+ "dns_label": subnet.additional_details['dnsLabel'],
+ "ipv6_cidr_block": subnet.additional_details['ipv6CidrBlock'],
+ "ipv6_virtual_router_ip": subnet.additional_details['ipv6VirtualRouterIp'],
+ "lifecycle_state": subnet.lifecycle_state,
+ "prohibit_public_ip_on_vnic": subnet.additional_details['prohibitPublicIpOnVnic'],
+ "route_table_id": subnet.additional_details['routeTableId'],
+ "security_list_ids": subnet.additional_details['securityListIds'],
+ "subnet_domain_name": subnet.additional_details['subnetDomainName'],
+ "time_created": subnet.time_created.strftime(self.__iso_time_format),
+ "vcn_id": subnet.additional_details['vcnId'],
+ "virtual_router_ip": subnet.additional_details['virtualRouterIp'],
+ "virtual_router_mac": subnet.additional_details['virtualRouterMac'],
+ "freeform_tags": subnet.freeform_tags,
+ "define_tags": subnet.defined_tags,
+ "region": region_key,
+ "notes": ""
+
+ }
+ # Adding subnet to subnet list
+ self.__network_subnets.append(record)
+ except Exception as e:
+ deep_link = self.__oci_networking_uri + subnet.additional_details['vcnId'] + "/subnets/" + subnet.identifier + '?region=' + region_key
+ record = {
+ "id": subnet.identifier,
+ "availability_domain": subnet.availability_domain,
+ "cidr_block": subnet.additional_details['cidrBlock'],
+ "compartment_id": subnet.compartment_id,
+ "dhcp_options_id": subnet.additional_details['dhcpOptionsId'],
+ "display_name": subnet.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, subnet.display_name),
+ "dns_label": subnet.additional_details['dnsLabel'],
+ "ipv6_cidr_block": "",
+ "ipv6_virtual_router_ip": "",
+ "lifecycle_state": subnet.lifecycle_state,
+ "prohibit_public_ip_on_vnic": subnet.additional_details['prohibitPublicIpOnVnic'],
+ "route_table_id": subnet.additional_details['routeTableId'],
+ "security_list_ids": subnet.additional_details['securityListIds'],
+ "subnet_domain_name": subnet.additional_details['subnetDomainName'],
+ "time_created": subnet.time_created.strftime(self.__iso_time_format),
+ "vcn_id": subnet.additional_details['vcnId'],
+ "virtual_router_ip": subnet.additional_details['virtualRouterIp'],
+ "virtual_router_mac": subnet.additional_details['virtualRouterMac'],
+ "region": region_key,
+ "notes": str(e)
+
+ }
+ self.__network_subnets.append(record)
+ print("\tProcessed " + str(len(self.__network_subnets)) + " Network Subnets")
+
+ return self.__network_subnets
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __network_read_network_subnets " + str(e.args))
+
+ ##########################################################################
+ # Load DRG Attachments
+ ##########################################################################
+ def __network_read_drg_attachments(self):
+ count_of_drg_attachments = 0
+ try:
+ for region_key, region_values in self.__regions.items():
+ # Looping through compartments in tenancy
+ drg_resources = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query DrgAttachment resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ compartments = set()
+
+ for drg in drg_resources:
+ compartments.add(drg.compartment_id)
+
+ for compartment in compartments:
+ drg_attachment_data = oci.pagination.list_call_get_all_results(
+ region_values['network_client'].list_drg_attachments,
+ compartment_id=compartment,
+ lifecycle_state="ATTACHED",
+ attachment_type="ALL"
+ ).data
+
+ # Looping through DRG Attachments in a compartment
+ for drg_attachment in drg_attachment_data:
+ deep_link = self.__oci_drg_uri + drg_attachment.drg_id + "/drg-attachment/" + drg_attachment.id + '?region=' + region_key
+ try:
+ record = {
+ "id": drg_attachment.id,
+ "display_name": drg_attachment.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, drg_attachment.display_name),
+ "drg_id": drg_attachment.drg_id,
+ "vcn_id": drg_attachment.vcn_id,
+ "drg_route_table_id": str(drg_attachment.drg_route_table_id),
+ "export_drg_route_distribution_id": str(drg_attachment.export_drg_route_distribution_id),
+ "is_cross_tenancy": drg_attachment.is_cross_tenancy,
+ "lifecycle_state": drg_attachment.lifecycle_state,
+ "network_details": drg_attachment.network_details,
+ "network_id": drg_attachment.network_details.id,
+ "network_type": drg_attachment.network_details.type,
+ "freeform_tags": drg_attachment.freeform_tags,
+ "define_tags": drg_attachment.defined_tags,
+ "time_created": drg_attachment.time_created.strftime(self.__iso_time_format),
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": drg_attachment.id,
+ "display_name": drg_attachment.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, drg_attachment.display_name),
+ "drg_id": drg_attachment.drg_id,
+ "vcn_id": drg_attachment.vcn_id,
+ "drg_route_table_id": str(drg_attachment.drg_route_table_id),
+ "export_drg_route_distribution_id": str(drg_attachment.export_drg_route_distribution_id),
+ "is_cross_tenancy": drg_attachment.is_cross_tenancy,
+ "lifecycle_state": drg_attachment.lifecycle_state,
+ "network_details": drg_attachment.network_details,
+ "network_id": "",
+ "network_type": "",
+ "freeform_tags": drg_attachment.freeform_tags,
+ "define_tags": drg_attachment.defined_tags,
+ "time_created": drg_attachment.time_created.strftime(self.__iso_time_format),
+ "region": region_key,
+ "notes": str(e)
+ }
+
+ # Adding DRG Attachment to DRG Attachments list
+ try:
+ self.__network_drg_attachments[drg_attachment.drg_id].append(record)
+ except Exception:
+ self.__network_drg_attachments[drg_attachment.drg_id] = []
+ self.__network_drg_attachments[drg_attachment.drg_id].append(record)
+ # Counter
+ count_of_drg_attachments += 1
+
+ print("\tProcessed " + str(count_of_drg_attachments) + " DRG Attachments")
+ return self.__network_drg_attachments
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __network_read_drg_attachments " + str(e.args))
+
+ ##########################################################################
+ # Load DRGs
+ ##########################################################################
+ def __network_read_drgs(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ # Looping through compartments in tenancy
+ drg_resources = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query Drg resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ compartments = set()
+
+ for drg in drg_resources:
+ compartments.add(drg.compartment_id)
+
+ for compartment in compartments:
+ drg_data = oci.pagination.list_call_get_all_results(
+ region_values['network_client'].list_drgs,
+ compartment_id=compartment,
+ ).data
+ # Looping through DRGs in a compartment
+ for drg in drg_data:
+ deep_link = self.__oci_drg_uri + drg.id + '?region=' + region_key
+ # Fetch DRG Upgrade status
+ try:
+ upgrade_status = region_values['network_client'].get_upgrade_status(drg.id).data.status
+ except Exception:
+ upgrade_status = "Not Available"
+
+ try:
+ record = {
+ "id": drg.id,
+ "display_name": drg.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, drg.display_name),
+ "default_drg_route_tables": drg.default_drg_route_tables,
+ "default_ipsec_tunnel_route_table": drg.default_drg_route_tables.ipsec_tunnel,
+ "default_remote_peering_connection_route_table": drg.default_drg_route_tables.remote_peering_connection,
+ "default_vcn_table": drg.default_drg_route_tables.vcn,
+ "default_virtual_circuit_route_table": drg.default_drg_route_tables.virtual_circuit,
+ "default_export_drg_route_distribution_id": drg.default_export_drg_route_distribution_id,
+ "compartment_id": drg.compartment_id,
+ "lifecycle_state": drg.lifecycle_state,
+ "upgrade_status": upgrade_status,
+ "time_created": drg.time_created.strftime(self.__iso_time_format),
+ "freeform_tags": drg.freeform_tags,
+ "define_tags": drg.defined_tags,
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": drg.id,
+ "display_name": drg.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, drg.display_name),
+ "default_drg_route_tables": drg.default_drg_route_tables,
+ "default_ipsec_tunnel_route_table": "",
+ "default_remote_peering_connection_route_table": "",
+ "default_vcn_table": "",
+ "default_virtual_circuit_route_table": "",
+ "default_export_drg_route_distribution_id": drg.default_export_drg_route_distribution_id,
+ "compartment_id": drg.compartment_id,
+ "lifecycle_state": drg.lifecycle_state,
+ "upgrade_status": upgrade_status,
+ "time_created": drg.time_created.strftime(self.__iso_time_format),
+ "freeform_tags": drg.freeform_tags,
+ "define_tags": drg.defined_tags,
+ "region": region_key,
+ "notes": str(e)
+
+ }
+ # for Raw Data
+ self.__raw_network_drgs.append(record)
+ # For Checks data
+ self.__network_drgs[drg.id] = record
+
+ print("\tProcessed " + str(len(self.__network_drgs)) + " Dynamic Routing Gateways")
+ return self.__network_drgs
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __network_read_drgs " + str(e.args))
+
+ ##########################################################################
+ # Load Network FastConnect
+ ##########################################################################
+ def __network_read_fastonnects(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ # Looping through compartments in tenancy
+ fastconnects = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query VirtualCircuit resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ compartments = set()
+
+ for vc in fastconnects:
+ compartments.add(vc.compartment_id)
+
+ for compartment in compartments:
+ fastconnect_data = oci.pagination.list_call_get_all_results(
+ region_values['network_client'].list_virtual_circuits,
+ compartment_id=compartment,
+ ).data
+ # lifecycle_state="PROVISIONED"
+ # Looping through fastconnects in a compartment
+ for fastconnect in fastconnect_data:
+ deep_link = self.__oci_fastconnect_uri + fastconnect.id + '?region=' + region_key
+ try:
+ record = {
+ "id": fastconnect.id,
+ "display_name": fastconnect.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, fastconnect.display_name),
+ "bandwidth_shape_name": fastconnect.bandwidth_shape_name,
+ "bgp_admin_state": fastconnect.bgp_admin_state,
+ "bgp_ipv6_session_state": fastconnect.bgp_ipv6_session_state,
+ "bgp_management": fastconnect.bgp_management,
+ "bgp_session_state": fastconnect.bgp_session_state,
+ "compartment_id": fastconnect.compartment_id,
+ "cross_connect_mappings": fastconnect.cross_connect_mappings,
+ "customer_asn": fastconnect.customer_asn,
+ "customer_bgp_asn": fastconnect.customer_bgp_asn,
+ "gateway_id": fastconnect.gateway_id,
+ "ip_mtu": fastconnect.ip_mtu,
+ "is_bfd_enabled": fastconnect.is_bfd_enabled,
+ "lifecycle_state": fastconnect.lifecycle_state,
+ "oracle_bgp_asn": fastconnect.oracle_bgp_asn,
+ "provider_name": fastconnect.provider_name,
+ "provider_service_id": fastconnect.provider_service_id,
+ "provider_service_key_name": fastconnect.provider_service_key_name,
+ "provider_service_name": fastconnect.provider_service_name,
+ "provider_state": fastconnect.provider_state,
+ "public_prefixes": fastconnect.public_prefixes,
+ "reference_comment": fastconnect.reference_comment,
+ "fastconnect_region": fastconnect.region,
+ "routing_policy": fastconnect.routing_policy,
+ "service_type": fastconnect.service_type,
+ "time_created": fastconnect.time_created.strftime(self.__iso_time_format),
+ "type": fastconnect.type,
+ "freeform_tags": fastconnect.freeform_tags,
+ "define_tags": fastconnect.defined_tags,
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": fastconnect.id,
+ "display_name": fastconnect.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, fastconnect.display_name),
+ "bandwidth_shape_name": "",
+ "bgp_admin_state": "",
+ "bgp_ipv6_session_state": "",
+ "bgp_management": "",
+ "bgp_session_state": "",
+ "compartment_id": fastconnect.compartment_id,
+ "cross_connect_mappings": "",
+ "customer_asn": "",
+ "customer_bgp_asn": "",
+ "gateway_id": "",
+ "ip_mtu": "",
+ "is_bfd_enabled": "",
+ "lifecycle_state": "",
+ "oracle_bgp_asn": "",
+ "provider_name": "",
+ "provider_service_id": "",
+ "provider_service_key_name": "",
+ "provider_service_name": "",
+ "provider_state": "",
+ "public_prefixes": "",
+ "reference_comment": "",
+ "fastconnect_region": "",
+ "routing_policy": "",
+ "service_type": "",
+ "time_created": "",
+ "type": "",
+ "freeform_tags": "",
+ "define_tags": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+
+ # Adding fastconnect to fastconnect dict
+ try:
+ self.__network_fastconnects[fastconnect.gateway_id].append(record)
+ except Exception:
+ self.__network_fastconnects[fastconnect.gateway_id] = []
+ self.__network_fastconnects[fastconnect.gateway_id].append(record)
+
+ print("\tProcessed " + str(len(list(itertools.chain.from_iterable(self.__network_fastconnects.values())))) + " FastConnects")
+ return self.__network_fastconnects
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __network_read_fastonnects " + str(e.args))
+
+ ##########################################################################
+ # Load IP Sec Connections
+ ##########################################################################
+ def __network_read_ip_sec_connections(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ ip_sec_connections_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query IPSecConnection resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ for ip_sec in ip_sec_connections_data:
+ try:
+ deep_link = self.__oci_ipsec_uri + ip_sec.identifier + '?region=' + region_key
+ record = {
+ "id": ip_sec.identifier,
+ "display_name": ip_sec.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, ip_sec.display_name),
+ "cpe_id": ip_sec.additional_details['cpeId'],
+ "drg_id": ip_sec.additional_details['drgId'],
+ "compartment_id": ip_sec.compartment_id,
+ # "cpe_local_identifier": ip_sec.cpe_local_identifier,
+ # "cpe_local_identifier_type": ip_sec.cpe_local_identifier_type,
+ "lifecycle_state": ip_sec.lifecycle_state,
+ "freeform_tags": ip_sec.freeform_tags,
+ "define_tags": ip_sec.defined_tags,
+ "region": region_key,
+ "tunnels": [],
+ "number_tunnels_up": 0,
+ "tunnels_up": True, # Assume all tunnels are up until a down tunnel is found
+ "notes": ""
+ }
+ # Getting Tunnel Data
+ try:
+ ip_sec_tunnels_data = oci.pagination.list_call_get_all_results(
+ region_values['network_client'].list_ip_sec_connection_tunnels,
+ ipsc_id=ip_sec.identifier,
+ ).data
+ for tunnel in ip_sec_tunnels_data:
+ deep_link = self.__oci_ipsec_uri + ip_sec.identifier + "/tunnels/" + tunnel.id + '?region=' + region_key
+ tunnel_record = {
+ "id": tunnel.id,
+ "cpe_ip": tunnel.cpe_ip,
+ "display_name": tunnel.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, tunnel.display_name),
+ "vpn_ip": tunnel.vpn_ip,
+ "ike_version": tunnel.ike_version,
+ "encryption_domain_config": tunnel.encryption_domain_config,
+ "lifecycle_state": tunnel.lifecycle_state,
+ "nat_translation_enabled": tunnel.nat_translation_enabled,
+ "bgp_session_info": tunnel.bgp_session_info,
+ "oracle_can_initiate": tunnel.oracle_can_initiate,
+ "routing": tunnel.routing,
+ "status": tunnel.status,
+ "compartment_id": tunnel.compartment_id,
+ "dpd_mode": tunnel.dpd_mode,
+ "dpd_timeout_in_sec": tunnel.dpd_timeout_in_sec,
+ "time_created": tunnel.time_created.strftime(self.__iso_time_format),
+ "time_status_updated": str(tunnel.time_status_updated),
+ "notes": ""
+ }
+ if tunnel_record['status'].upper() == "UP":
+ record['number_tunnels_up'] += 1
+ else:
+ record['tunnels_up'] = False
+ record["tunnels"].append(tunnel_record)
+ except Exception:
+ print("\tUnable to list tunnels for IPSec connection: " + ip_sec.display_name + " id: " + ip_sec.identifier)
+ record['tunnels_up'] = False
+
+ except Exception as e:
+ record = {
+ "id": ip_sec.identifier,
+ "display_name": ip_sec.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, ip_sec.display_name),
+ "cpe_id": "",
+ "drg_id": "",
+ "compartment_id": ip_sec.compartment_id,
+ "cpe_local_identifier": "",
+ "cpe_local_identifier_type": "",
+ "lifecycle_state": "",
+ "freeform_tags": "",
+ "define_tags": "",
+ "region": region_key,
+ "tunnels": [],
+ "number_tunnels_up": 0,
+ "tunnels_up": False,
+ "notes": str(e)
+ }
+
+ try:
+ self.__network_ipsec_connections[ip_sec.additional_details['drgId']].append(record)
+ except Exception:
+ self.__network_ipsec_connections[ip_sec.additional_details['drgId']] = []
+ self.__network_ipsec_connections[ip_sec.additional_details['drgId']].append(record)
+
+ print("\tProcessed " + str(len(list(itertools.chain.from_iterable(self.__network_ipsec_connections.values())))) + " IPSec Connections")
+ return self.__network_ipsec_connections
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __network_read_ip_sec_connections " + str(e.args))
+
+ ############################################
+ # Load Autonomous Databases
+ ############################################
+ def __adb_read_adbs(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ adb_query_resources = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query AutonomousDatabase resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ compartments = set()
+
+ for adb in adb_query_resources:
+ compartments.add(adb.compartment_id)
+
+ for compartment in compartments:
+ autonomous_databases = oci.pagination.list_call_get_all_results(
+ region_values['adb_client'].list_autonomous_databases,
+ compartment_id=compartment
+ ).data
+ for adb in autonomous_databases:
+ try:
+ deep_link = self.__oci_adb_uri + adb.id + '?region=' + region_key
+ # Issue 295 fixed
+ if adb.lifecycle_state not in [ oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATED, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATING, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_UNAVAILABLE ]:
+ record = {
+ "id": adb.id,
+ "display_name": adb.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, adb.display_name),
+ "apex_details": adb.apex_details,
+ "are_primary_whitelisted_ips_used": adb.are_primary_whitelisted_ips_used,
+ "autonomous_container_database_id": adb.autonomous_container_database_id,
+ "autonomous_maintenance_schedule_type": adb.autonomous_maintenance_schedule_type,
+ "available_upgrade_versions": adb.available_upgrade_versions,
+ "backup_config": adb.backup_config,
+ "compartment_id": adb.compartment_id,
+ "connection_strings": adb.connection_strings,
+ "connection_urls": adb.connection_urls,
+ "cpu_core_count": adb.cpu_core_count,
+ "customer_contacts": adb.customer_contacts,
+ "data_safe_status": adb.data_safe_status,
+ "data_storage_size_in_gbs": adb.data_storage_size_in_gbs,
+ "data_storage_size_in_tbs": adb.data_storage_size_in_tbs,
+ "database_management_status": adb.database_management_status,
+ "dataguard_region_type": adb.dataguard_region_type,
+ "db_name": adb.db_name,
+ "db_version": adb.db_version,
+ "db_workload": adb.db_workload,
+ "defined_tags": adb.defined_tags,
+ "failed_data_recovery_in_seconds": adb.failed_data_recovery_in_seconds,
+ "freeform_tags": adb.freeform_tags,
+ "infrastructure_type": adb.infrastructure_type,
+ "is_access_control_enabled": adb.is_access_control_enabled,
+ "is_auto_scaling_enabled": adb.is_auto_scaling_enabled,
+ "is_data_guard_enabled": adb.is_data_guard_enabled,
+ "is_dedicated": adb.is_dedicated,
+ "is_free_tier": adb.is_free_tier,
+ "is_mtls_connection_required": adb.is_mtls_connection_required,
+ "is_preview": adb.is_preview,
+ "is_reconnect_clone_enabled": adb.is_reconnect_clone_enabled,
+ "is_refreshable_clone": adb.is_refreshable_clone,
+ "key_history_entry": adb.key_history_entry,
+ "key_store_id": adb.key_store_id,
+ "key_store_wallet_name": adb.key_store_wallet_name,
+ "kms_key_id": adb.kms_key_id,
+ "kms_key_lifecycle_details": adb.kms_key_lifecycle_details,
+ "kms_key_version_id": adb.kms_key_version_id,
+ "license_model": adb.license_model,
+ "lifecycle_details": adb.lifecycle_details,
+ "lifecycle_state": adb.lifecycle_state,
+ "nsg_ids": adb.nsg_ids,
+ "ocpu_count": adb.ocpu_count,
+ "open_mode": adb.open_mode,
+ "operations_insights_status": adb.operations_insights_status,
+ "peer_db_ids": adb.peer_db_ids,
+ "permission_level": adb.permission_level,
+ "private_endpoint": adb.private_endpoint,
+ "private_endpoint_ip": adb.private_endpoint_ip,
+ "private_endpoint_label": adb.private_endpoint_label,
+ "refreshable_mode": adb.refreshable_mode,
+ "refreshable_status": adb.refreshable_status,
+ "role": adb.role,
+ "scheduled_operations": adb.scheduled_operations,
+ "service_console_url": adb.service_console_url,
+ "source_id": adb.source_id,
+ "standby_whitelisted_ips": adb.standby_whitelisted_ips,
+ "subnet_id": adb.subnet_id,
+ "supported_regions_to_clone_to": adb.supported_regions_to_clone_to,
+ "system_tags": adb.system_tags,
+ "time_created": adb.time_created.strftime(self.__iso_time_format),
+ "time_data_guard_role_changed": str(adb.time_data_guard_role_changed),
+ "time_deletion_of_free_autonomous_database": str(adb.time_deletion_of_free_autonomous_database),
+ "time_local_data_guard_enabled": str(adb.time_local_data_guard_enabled),
+ "time_maintenance_begin": str(adb.time_maintenance_begin),
+ "time_maintenance_end": str(adb.time_maintenance_end),
+ "time_of_last_failover": str(adb.time_of_last_failover),
+ "time_of_last_refresh": str(adb.time_of_last_refresh),
+ "time_of_last_refresh_point": str(adb.time_of_last_refresh_point),
+ "time_of_last_switchover": str(adb.time_of_last_switchover),
+ "time_of_next_refresh": str(adb.time_of_next_refresh),
+ "time_reclamation_of_free_autonomous_database": str(adb.time_reclamation_of_free_autonomous_database),
+ "time_until_reconnect_clone_enabled": str(adb.time_until_reconnect_clone_enabled),
+ "used_data_storage_size_in_tbs": str(adb.used_data_storage_size_in_tbs),
+ "vault_id": adb.vault_id,
+ "whitelisted_ips": adb.whitelisted_ips,
+ "region": region_key,
+ "notes": ""
+ }
+ else:
+ record = {
+ "id": adb.id,
+ "display_name": adb.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, adb.display_name),
+ "apex_details": "",
+ "are_primary_whitelisted_ips_used": "",
+ "autonomous_container_database_id": "",
+ "autonomous_maintenance_schedule_type": "",
+ "available_upgrade_versions": "",
+ "backup_config": "",
+ "compartment_id": adb.compartment_id,
+ "connection_strings": "",
+ "connection_urls": "",
+ "cpu_core_count": "",
+ "customer_contacts": "",
+ "data_safe_status": "",
+ "data_storage_size_in_gbs": "",
+ "data_storage_size_in_tbs": "",
+ "database_management_status": "",
+ "dataguard_region_type": "",
+ "db_name": "",
+ "db_version": "",
+ "db_workload": "",
+ "defined_tags": "",
+ "failed_data_recovery_in_seconds": "",
+ "freeform_tags": "",
+ "infrastructure_type": "",
+ "is_access_control_enabled": "",
+ "is_auto_scaling_enabled": "",
+ "is_data_guard_enabled": "",
+ "is_dedicated": "",
+ "is_free_tier": "",
+ "is_mtls_connection_required": "",
+ "is_preview": "",
+ "is_reconnect_clone_enabled": "",
+ "is_refreshable_clone": "",
+ "key_history_entry": "",
+ "key_store_id": "",
+ "key_store_wallet_name": "",
+ "kms_key_id": "",
+ "kms_key_lifecycle_details": "",
+ "kms_key_version_id": "",
+ "license_model": "",
+ "lifecycle_details": "",
+ "lifecycle_state": adb.lifecycle_state,
+ "nsg_ids": "",
+ "ocpu_count": "",
+ "open_mode": "",
+ "operations_insights_status": "",
+ "peer_db_ids": "",
+ "permission_level": "",
+ "private_endpoint": "",
+ "private_endpoint_ip": "",
+ "private_endpoint_label": "",
+ "refreshable_mode": "",
+ "refreshable_status": "",
+ "role": "",
+ "scheduled_operations": "",
+ "service_console_url": "",
+ "source_id": "",
+ "standby_whitelisted_ips": "",
+ "subnet_id": "",
+ "supported_regions_to_clone_to": "",
+ "system_tags": "",
+ "time_created": "",
+ "time_data_guard_role_changed": "",
+ "time_deletion_of_free_autonomous_database": "",
+ "time_local_data_guard_enabled": "",
+ "time_maintenance_begin": "",
+ "time_maintenance_end": "",
+ "time_of_last_failover": "",
+ "time_of_last_refresh": "",
+ "time_of_last_refresh_point": "",
+ "time_of_last_switchover": "",
+ "time_of_next_refresh": "",
+ "time_reclamation_of_free_autonomous_database": "",
+ "time_until_reconnect_clone_enabled": "",
+ "used_data_storage_size_in_tbs": "",
+ "vault_id": "",
+ "whitelisted_ips": "",
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": "",
+ "display_name": "",
+ "deep_link": "",
+ "apex_details": "",
+ "are_primary_whitelisted_ips_used": "",
+ "autonomous_container_database_id": "",
+ "autonomous_maintenance_schedule_type": "",
+ "available_upgrade_versions": "",
+ "backup_config": "",
+ "compartment_id": "",
+ "connection_strings": "",
+ "connection_urls": "",
+ "cpu_core_count": "",
+ "customer_contacts": "",
+ "data_safe_status": "",
+ "data_storage_size_in_gbs": "",
+ "data_storage_size_in_tbs": "",
+ "database_management_status": "",
+ "dataguard_region_type": "",
+ "db_name": "",
+ "db_version": "",
+ "db_workload": "",
+ "defined_tags": "",
+ "failed_data_recovery_in_seconds": "",
+ "freeform_tags": "",
+ "infrastructure_type": "",
+ "is_access_control_enabled": "",
+ "is_auto_scaling_enabled": "",
+ "is_data_guard_enabled": "",
+ "is_dedicated": "",
+ "is_free_tier": "",
+ "is_mtls_connection_required": "",
+ "is_preview": "",
+ "is_reconnect_clone_enabled": "",
+ "is_refreshable_clone": "",
+ "key_history_entry": "",
+ "key_store_id": "",
+ "key_store_wallet_name": "",
+ "kms_key_id": "",
+ "kms_key_lifecycle_details": "",
+ "kms_key_version_id": "",
+ "license_model": "",
+ "lifecycle_details": "",
+ "lifecycle_state": "",
+ "nsg_ids": "",
+ "ocpu_count": "",
+ "open_mode": "",
+ "operations_insights_status": "",
+ "peer_db_ids": "",
+ "permission_level": "",
+ "private_endpoint": "",
+ "private_endpoint_ip": "",
+ "private_endpoint_label": "",
+ "refreshable_mode": "",
+ "refreshable_status": "",
+ "role": "",
+ "scheduled_operations": "",
+ "service_console_url": "",
+ "source_id": "",
+ "standby_whitelisted_ips": "",
+ "subnet_id": "",
+ "supported_regions_to_clone_to": "",
+ "system_tags": "",
+ "time_created": "",
+ "time_data_guard_role_changed": "",
+ "time_deletion_of_free_autonomous_database": "",
+ "time_local_data_guard_enabled": "",
+ "time_maintenance_begin": "",
+ "time_maintenance_end": "",
+ "time_of_last_failover": "",
+ "time_of_last_refresh": "",
+ "time_of_last_refresh_point": "",
+ "time_of_last_switchover": "",
+ "time_of_next_refresh": "",
+ "time_reclamation_of_free_autonomous_database": "",
+ "time_until_reconnect_clone_enabled": "",
+ "used_data_storage_size_in_tbs": "",
+ "vault_id": "",
+ "whitelisted_ips": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__autonomous_databases.append(record)
+
+ print("\tProcessed " + str(len(self.__autonomous_databases)) + " Autonomous Databases")
+ return self.__autonomous_databases
+ except Exception as e:
+ raise RuntimeError("Error in __adb_read_adbs " + str(e.args))
+
+ ############################################
+ # Load Oracle Integration Cloud
+ ############################################
+ def __oic_read_oics(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ oic_resources = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query IntegrationInstance resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ compartments = set()
+
+ for oic_resource in oic_resources:
+ compartments.add(oic_resource.compartment_id)
+
+ for compartment in compartments:
+ oic_instances = oci.pagination.list_call_get_all_results(
+ region_values['oic_client'].list_integration_instances,
+ compartment_id=compartment
+ ).data
+ for oic_instance in oic_instances:
+ if oic_instance.lifecycle_state == 'ACTIVE' or oic_instance.lifecycle_state == 'INACTIVE':
+ deep_link = self.__oci_oicinstance_uri + oic_instance.id + '?region=' + region_key
+ try:
+ record = {
+ "id": oic_instance.id,
+ "display_name": oic_instance.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, oic_instance.display_name),
+ "network_endpoint_details": oic_instance.network_endpoint_details,
+ "compartment_id": oic_instance.compartment_id,
+ "alternate_custom_endpoints": oic_instance.alternate_custom_endpoints,
+ "consumption_model": oic_instance.consumption_model,
+ "custom_endpoint": oic_instance.custom_endpoint,
+ "instance_url": oic_instance.instance_url,
+ "integration_instance_type": oic_instance.integration_instance_type,
+ "is_byol": oic_instance.is_byol,
+ "is_file_server_enabled": oic_instance.is_file_server_enabled,
+ "is_visual_builder_enabled": oic_instance.is_visual_builder_enabled,
+ "lifecycle_state": oic_instance.lifecycle_state,
+ "message_packs": oic_instance.message_packs,
+ "state_message": oic_instance.state_message,
+ "time_created": oic_instance.time_created.strftime(self.__iso_time_format),
+ "time_updated": str(oic_instance.time_updated),
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": oic_instance.id,
+ "display_name": oic_instance.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, oic_instance.display_name),
+ "network_endpoint_details": "",
+ "compartment_id": "",
+ "alternate_custom_endpoints": "",
+ "consumption_model": "",
+ "custom_endpoint": "",
+ "instance_url": "",
+ "integration_instance_type": "",
+ "is_byol": "",
+ "is_file_server_enabled": "",
+ "is_visual_builder_enabled": "",
+ "lifecycle_state": "",
+ "message_packs": "",
+ "state_message": "",
+ "time_created": "",
+ "time_updated": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__integration_instances.append(record)
+ print("\tProcessed " + str(len(self.__integration_instances)) + " Integration Instances")
+ return self.__integration_instances
+ except Exception as e:
+ raise RuntimeError("Error in __oic_read_oics " + str(e.args))
+
+ ############################################
+ # Load Oracle Analytics Cloud
+ ############################################
+ def __oac_read_oacs(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ oac_resources = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query AnalyticsInstance resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ compartments = set()
+
+ for resource in oac_resources:
+ compartments.add(resource.compartment_id)
+
+ for compartment in compartments:
+ oac_instances = oci.pagination.list_call_get_all_results(
+ region_values['oac_client'].list_analytics_instances,
+ compartment_id=compartment
+ ).data
+ for oac_instance in oac_instances:
+ deep_link = self.__oci_oacinstance_uri + oac_instance.id + '?region=' + region_key
+ try:
+ record = {
+ "id": oac_instance.id,
+ "name": oac_instance.name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, oac_instance.name),
+ "description": oac_instance.description,
+ "network_endpoint_details": oac_instance.network_endpoint_details,
+ "network_endpoint_type": oac_instance.network_endpoint_details.network_endpoint_type,
+ "compartment_id": oac_instance.compartment_id,
+ "lifecycle_state": oac_instance.lifecycle_state,
+ "email_notification": oac_instance.email_notification,
+ "feature_set": oac_instance.feature_set,
+ "service_url": oac_instance.service_url,
+ "capacity": oac_instance.capacity,
+ "license_type": oac_instance.license_type,
+ "time_created": oac_instance.time_created.strftime(self.__iso_time_format),
+ "time_updated": str(oac_instance.time_updated),
+ "region": region_key,
+ "notes": ""
+ }
+ except Exception as e:
+ record = {
+ "id": oac_instance.id,
+ "name": oac_instance.name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, oac_instance.name),
+ "description": "",
+ "network_endpoint_details": "",
+ "network_endpoint_type": "",
+ "compartment_id": "",
+ "lifecycle_state": "",
+ "email_notification": "",
+ "feature_set": "",
+ "service_url": "",
+ "capacity": "",
+ "license_type": "",
+ "time_created": "",
+ "time_updated": "",
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__analytics_instances.append(record)
+
+ print("\tProcessed " + str(len(self.__analytics_instances)) + " Analytics Instances")
+ return self.__analytics_instances
+ except Exception as e:
+ raise RuntimeError("Error in __oac_read_oacs " + str(e.args))
+
+ ##########################################################################
+ # Events
+ ##########################################################################
+ def __events_read_event_rules(self):
+
+ try:
+ for region_key, region_values in self.__regions.items():
+ events_rules_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query EventRule resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ for event_rule in events_rules_data:
+ deep_link = self.__oci_events_uri + event_rule.identifier + '?region=' + region_key
+ record = {
+ "compartment_id": event_rule.compartment_id,
+ "condition": event_rule.additional_details['condition'],
+ "description": event_rule.additional_details['description'],
+ "display_name": event_rule.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, event_rule.display_name),
+ "id": event_rule.identifier,
+ # "is_enabled": event_rule.is_enabled,
+ "lifecycle_state": event_rule.lifecycle_state,
+ "time_created": event_rule.time_created.strftime(self.__iso_time_format),
+ "region": region_key
+ }
+ self.__event_rules.append(record)
+
+ print("\tProcessed " + str(len(self.__event_rules)) + " Event Rules")
+ return self.__event_rules
+ except Exception as e:
+ raise RuntimeError("Error in __events_read_event_rules " + str(e.args))
+
+ ##########################################################################
+ # Logging - Log Groups and Logs
+ ##########################################################################
+ def __logging_read_log_groups_and_logs(self):
+
+ try:
+ for region_key, region_values in self.__regions.items():
+ log_groups = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query LogGroup resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ # Looping through log groups to get logs
+ for log_group in log_groups:
+ deep_link = self.__oci_loggroup_uri + log_group.identifier + '?region=' + region_key
+ record = {
+ "compartment_id": log_group.compartment_id,
+ "description": log_group.additional_details['description'],
+ "display_name": log_group.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, log_group.display_name),
+ "id": log_group.identifier,
+ "time_created": log_group.time_created.strftime(self.__iso_time_format),
+ # "time_last_modified": str(log_group.time_last_modified),
+ "defined_tags": log_group.defined_tags,
+ "freeform_tags": log_group.freeform_tags,
+ "region": region_key,
+ "logs": [],
+ "notes" : ""
+ }
+
+ try:
+ logs = oci.pagination.list_call_get_all_results(
+ region_values['logging_client'].list_logs,
+ log_group_id=log_group.identifier
+ ).data
+ for log in logs:
+ deep_link = self.__oci_loggroup_uri + log_group.identifier + "/logs/" + log.id + '?region=' + region_key
+ log_record = {
+ "compartment_id": log.compartment_id,
+ "display_name": log.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, log.display_name),
+ "id": log.id,
+ "is_enabled": log.is_enabled,
+ "lifecycle_state": log.lifecycle_state,
+ "log_group_id": log.log_group_id,
+ "log_type": log.log_type,
+ "retention_duration": log.retention_duration,
+ "time_created": log.time_created.strftime(self.__iso_time_format),
+ "time_last_modified": str(log.time_last_modified),
+ "defined_tags": log.defined_tags,
+ "freeform_tags": log.freeform_tags
+ }
+ try:
+ if log.configuration:
+ log_record["configuration_compartment_id"] = log.configuration.compartment_id
+ log_record["source_category"] = log.configuration.source.category
+ log_record["source_parameters"] = log.configuration.source.parameters
+ log_record["source_resource"] = log.configuration.source.resource
+ log_record["source_service"] = log.configuration.source.service
+ log_record["source_source_type"] = log.configuration.source.source_type
+ log_record["archiving_enabled"] = log.configuration.archiving.is_enabled
+
+ if log.configuration.source.service == 'flowlogs':
+ self.__subnet_logs[log.configuration.source.resource] = {"log_group_id": log.log_group_id, "log_id": log.id}
+
+ elif log.configuration.source.service == 'objectstorage' and 'write' in log.configuration.source.category:
+ # Only write logs
+ self.__write_bucket_logs[log.configuration.source.resource] = {"log_group_id": log.log_group_id, "log_id": log.id, "region": region_key}
+
+ elif log.configuration.source.service == 'objectstorage' and 'read' in log.configuration.source.category:
+ # Only read logs
+ self.__read_bucket_logs[log.configuration.source.resource] = {"log_group_id": log.log_group_id, "log_id": log.id, "region": region_key}
+
+ elif log.configuration.source.service == 'loadbalancer' and 'error' in log.configuration.source.category:
+ self.__load_balancer_error_logs.append(
+ log.configuration.source.resource)
+ elif log.configuration.source.service == 'loadbalancer' and 'access' in log.configuration.source.category:
+ self.__load_balancer_access_logs.append(
+ log.configuration.source.resource)
+ elif log.configuration.source.service == 'apigateway' and 'access' in log.configuration.source.category:
+ self.__api_gateway_access_logs.append(
+ log.configuration.source.resource)
+ elif log.configuration.source.service == 'apigateway' and 'error' in log.configuration.source.category:
+ self.__api_gateway_error_logs.append(
+ log.configuration.source.resource)
+ except Exception as e:
+ self.__errors.append({"id" : log.id, "error" : str(e)})
+ # Append Log to log List
+ record['logs'].append(log_record)
+ except Exception as e:
+ self.__errors.append({"id" : log_group.identifier, "error" : str(e) })
+ record['notes'] = str(e)
+
+
+ self.__logging_list.append(record)
+
+ print("\tProcessed " + str(len(self.__logging_list)) + " Log Groups")
+ return self.__logging_list
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __logging_read_log_groups_and_logs " + str(e.args))
+
+ ##########################################################################
+ # Vault Keys
+ ##########################################################################
+ def __vault_read_vaults(self):
+ self.__vaults = []
+ try:
+ for region_key, region_values in self.__regions.items():
+ keys_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query Key resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ vaults_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query Vault resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ # Get all Vaults in a compartment
+ for vlt in vaults_data:
+ deep_link = self.__oci_vault_uri + vlt.identifier + '?region=' + region_key
+ vault_record = {
+ "compartment_id": vlt.compartment_id,
+ # "crypto_endpoint": vlt.crypto_endpoint,
+ "display_name": vlt.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, vlt.display_name),
+ "id": vlt.identifier,
+ "lifecycle_state": vlt.lifecycle_state,
+ # "management_endpoint": vlt.management_endpoint,
+ "time_created": vlt.time_created.strftime(self.__iso_time_format),
+ "vault_type": vlt.additional_details['vaultType'],
+ "freeform_tags": vlt.freeform_tags,
+ "defined_tags": vlt.defined_tags,
+ "region": region_key,
+ "keys": []
+ }
+ for key in keys_data:
+ if vlt.identifier == key.additional_details['vaultId']:
+ deep_link = self.__oci_vault_uri + vlt.identifier + "/vaults/" + key.identifier + '?region=' + region_key
+ key_record = {
+ "id": key.identifier,
+ "display_name": key.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, key.display_name),
+ "compartment_id": key.compartment_id,
+ "lifecycle_state": key.lifecycle_state,
+ "time_created": key.time_created.strftime(self.__iso_time_format),
+ }
+ vault_record['keys'].append(key_record)
+
+ self.__vaults.append(vault_record)
+
+ print("\tProcessed " + str(len(self.__vaults)) + " Vaults")
+ return self.__vaults
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __vault_read_vaults " + str(e.args))
+
+ ##########################################################################
+ # OCI Budgets
+ ##########################################################################
+ def __budget_read_budgets(self):
+ try:
+ # Getting all budgets in tenancy of any type
+ budgets_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['budget_client'].list_budgets,
+ compartment_id=self.__tenancy.id,
+ target_type="ALL"
+ ).data
+ # Looping through Budgets to get records
+ for budget in budgets_data:
+ try:
+ alerts_data = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['budget_client'].list_alert_rules,
+ budget_id=budget.id,
+ ).data
+ except Exception:
+ print("\tFailed to get Budget Data for Budget Name: " + budget.display_name + " id: " + budget.id)
+ alerts_data = []
+
+ deep_link = self.__oci_budget_uri + budget.id
+ record = {
+ "actual_spend": budget.actual_spend,
+ "alert_rule_count": budget.alert_rule_count,
+ "amount": budget.amount,
+ "budget_processing_period_start_offset": budget.budget_processing_period_start_offset,
+ "compartment_id": budget.compartment_id,
+ "description": budget.description,
+ "display_name": budget.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, budget.display_name),
+ "id": budget.id,
+ "lifecycle_state": budget.lifecycle_state,
+ "processing_period_type": budget.processing_period_type,
+ "reset_period": budget.reset_period,
+ "target_compartment_id": budget.target_compartment_id,
+ "target_type": budget.target_type,
+ "targets": budget.targets,
+ "time_created": budget.time_created.strftime(self.__iso_time_format),
+ "time_spend_computed": str(budget.time_spend_computed),
+ "alerts": []
+ }
+
+ for alert in alerts_data:
+ record['alerts'].append(alert)
+
+ # Append Budget to list of Budgets
+ self.__budgets.append(record)
+
+ print("\tProcessed " + str(len(self.__budgets)) + " Budgets")
+ return self.__budgets
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __budget_read_budgets " + str(e.args))
+
+ ##########################################################################
+ # Audit Configuration
+ ##########################################################################
+ def __audit_read_tenancy_audit_configuration(self):
+ # Pulling the Audit Configuration
+ try:
+ self.__audit_retention_period = self.__regions[self.__home_region]['audit_client'].get_configuration(
+ self.__tenancy.id).data.retention_period_days
+ except Exception as e:
+ if "NotAuthorizedOrNotFound" in str(e):
+ self.__audit_retention_period = -1
+ print("\t*** Access to audit retention requires the user to be part of the Administrator group ***")
+ self.__errors.append({"id" : self.__tenancy.id, "error" : "*** Access to audit retention requires the user to be part of the Administrator group ***"})
+ else:
+ raise RuntimeError("Error in __audit_read_tenancy_audit_configuration " + str(e.args))
+
+ print("\tProcessed Audit Configuration.")
+ return self.__audit_retention_period
+
+ ##########################################################################
+ # Cloud Guard Configuration
+ ##########################################################################
+ def __cloud_guard_read_cloud_guard_configuration(self):
+ try:
+ self.__cloud_guard_config = self.__regions[self.__home_region]['cloud_guard_client'].get_configuration(
+ self.__tenancy.id).data
+ debug("__cloud_guard_read_cloud_guard_configuration Cloud Guard Configuration is: " + str(self.__cloud_guard_config))
+ self.__cloud_guard_config_status = self.__cloud_guard_config.status
+
+ print("\tProcessed Cloud Guard Configuration.")
+ return self.__cloud_guard_config_status
+
+ except Exception:
+ self.__cloud_guard_config_status = 'DISABLED'
+ print("*** Cloud Guard service requires a PayGo account ***")
+
+ ##########################################################################
+ # Cloud Guard Targets
+ ##########################################################################
+ def __cloud_guard_read_cloud_guard_targets(self):
+ if self.__cloud_guard_config_status == "ENABLED":
+ cloud_guard_targets = 0
+ try:
+ for compartment in self.__compartments:
+ if self.__if_not_managed_paas_compartment(compartment.name):
+ # Getting a compartments target
+ cg_targets = self.__regions[self.__cloud_guard_config.reporting_region]['cloud_guard_client'].list_targets(
+ compartment_id=compartment.id).data.items
+ debug("__cloud_guard_read_cloud_guard_targets: " + str(cg_targets) )
+ # Looping through targets to get target data
+ for target in cg_targets:
+ try:
+ # Getting Target data like recipes
+ try:
+ target_data = self.__regions[self.__cloud_guard_config.reporting_region]['cloud_guard_client'].get_target(
+ target_id=target.id
+ ).data
+
+ except Exception:
+ target_data = None
+ deep_link = self.__oci_cgtarget_uri + target.id
+ record = {
+ "compartment_id": target.compartment_id,
+ "defined_tags": target.defined_tags,
+ "display_name": target.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, target.display_name),
+ "freeform_tags": target.freeform_tags,
+ "id": target.id,
+ "lifecycle_state": target.lifecycle_state,
+ "lifecyle_details": target.lifecyle_details,
+ "system_tags": target.system_tags,
+ "recipe_count": target.recipe_count,
+ "target_resource_id": target.target_resource_id,
+ "target_resource_type": target.target_resource_type,
+ "time_created": target.time_created.strftime(self.__iso_time_format),
+ "time_updated": str(target.time_updated),
+ "inherited_by_compartments": target_data.inherited_by_compartments if target_data else "",
+ "description": target_data.description if target_data else "",
+ "target_details": target_data.target_details if target_data else "",
+ "target_detector_recipes": target_data.target_detector_recipes if target_data else "",
+ "target_responder_recipes": target_data.target_responder_recipes if target_data else ""
+ }
+ # Indexing by compartment_id
+
+ self.__cloud_guard_targets[compartment.id] = record
+
+ cloud_guard_targets += 1
+
+ except Exception:
+ print("\t Failed to get Cloud Guard Target Data for: " + target.display_name + " id: " + target.id)
+ self.__errors.append({"id" : target.id, "error" : "Failed to get Cloud Guard Target Data for: " + target.display_name + " id: " + target.id })
+
+ print("\tProcessed " + str(cloud_guard_targets) + " Cloud Guard Targets")
+ return self.__cloud_guard_targets
+
+ except Exception as e:
+ print("*** Cloud Guard service requires a PayGo account ***")
+ self.__errors.append({"id" : self.__tenancy.id, "error" : "Cloud Guard service requires a PayGo account. Error is: " + str(e)})
+
+ ##########################################################################
+ # Identity Password Policy
+ ##########################################################################
+ def __identity_read_tenancy_password_policy(self):
+ try:
+ self.__tenancy_password_policy = self.__regions[self.__home_region]['identity_client'].get_authentication_policy(
+ self.__tenancy.id
+ ).data
+
+ print("\tProcessed Tenancy Password Policy...")
+ return self.__tenancy_password_policy
+ except Exception as e:
+ if "NotAuthorizedOrNotFound" in str(e):
+ self.__tenancy_password_policy = None
+ print("\t*** Access to password policies in this tenancy requires elevated permissions. ***")
+ self.__errors.append({"id" : self.__tenancy.id, "error" : "*** Access to password policies in this tenancy requires elevated permissions. ***"})
+ else:
+ raise RuntimeError("Error in __identity_read_tenancy_password_policy " + str(e.args))
+
+ ##########################################################################
+ # Oracle Notifications Services for Subscriptions
+ ##########################################################################
+ def __ons_read_subscriptions(self):
+ try:
+ for region_key, region_values in self.__regions.items():
+ # Iterate through compartments to get all subscriptions
+ subs_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query OnsSubscription resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ for sub in subs_data:
+ deep_link = self.__oci_onssub_uri + sub.identifier + '?region=' + region_key
+ record = {
+ "id": sub.identifier,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, sub.identifier),
+ "compartment_id": sub.compartment_id,
+ # "created_time": sub.created_time, # this is an INT
+ "created_time": sub.time_created,
+ "endpoint": sub.additional_details['endpoint'],
+ "protocol": sub.additional_details['protocol'],
+ "topic_id": sub.additional_details['topicId'],
+ "lifecycle_state": sub.lifecycle_state,
+ "defined_tags": sub.defined_tags,
+ "freeform_tags": sub.freeform_tags,
+ "region": region_key
+
+ }
+ self.__subscriptions.append(record)
+
+ print("\tProcessed " + str(len(self.__subscriptions)) + " Subscriptions")
+ return self.__subscriptions
+
+ except Exception as e:
+ raise RuntimeError("Error in __ons_read_subscriptions " + str(e.args))
+
+ ##########################################################################
+ # Identity Tag Default
+ ##########################################################################
+ def __identity_read_tag_defaults(self):
+ try:
+ # Getting Tag Default for the Root Compartment - Only
+ tag_defaults = oci.pagination.list_call_get_all_results(
+ self.__regions[self.__home_region]['identity_client'].list_tag_defaults,
+ compartment_id=self.__tenancy.id
+ ).data
+ for tag in tag_defaults:
+ deep_link = self.__oci_compartment_uri + tag.compartment_id + "/tag-defaults"
+ record = {
+ "id": tag.id,
+ "compartment_id": tag.compartment_id,
+ "value": tag.value,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, tag.value),
+ "time_created": tag.time_created.strftime(self.__iso_time_format),
+ "tag_definition_id": tag.tag_definition_id,
+ "tag_definition_name": tag.tag_definition_name,
+ "tag_namespace_id": tag.tag_namespace_id,
+ "lifecycle_state": tag.lifecycle_state
+
+ }
+ self.__tag_defaults.append(record)
+
+ print("\tProcessed " + str(len(self.__tag_defaults)) + " Tag Defaults")
+ return self.__tag_defaults
+
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __identity_read_tag_defaults " + str(e.args))
+
+ ##########################################################################
+ # Get Service Connectors
+ ##########################################################################
+ def __sch_read_service_connectors(self):
+
+ try:
+ # looping through regions
+ for region_key, region_values in self.__regions.items():
+ # Collecting Service Connectors from each compartment
+ service_connectors_data = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=oci.resource_search.models.StructuredSearchDetails(
+ query="query ServiceConnector resources return allAdditionalFields where compartmentId != '" + self.__managed_paas_compartment_id + "'")
+ ).data
+
+ # Getting Bucket Info
+ for connector in service_connectors_data:
+ deep_link = self.__oci_serviceconnector_uri + connector.identifier + "/logging" + '?region=' + region_key
+ try:
+ service_connector = region_values['sch_client'].get_service_connector(
+ service_connector_id=connector.identifier
+ ).data
+ record = {
+ "id": service_connector.id,
+ "display_name": service_connector.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, service_connector.display_name),
+ "description": service_connector.description,
+ "freeform_tags": service_connector.freeform_tags,
+ "defined_tags": service_connector.defined_tags,
+ "lifecycle_state": service_connector.lifecycle_state,
+ # "lifecycle_details": service_connector.lifecyle_details,
+ "system_tags": service_connector.system_tags,
+ "time_created": service_connector.time_created.strftime(self.__iso_time_format),
+ # "time_updated": str(service_connector.time_updated),
+ "target_kind": service_connector.target.kind,
+ "log_sources": [],
+ "region": region_key,
+ "notes": ""
+ }
+ for log_source in service_connector.source.log_sources:
+ record['log_sources'].append({
+ 'compartment_id': log_source.compartment_id,
+ 'log_group_id': log_source.log_group_id,
+ 'log_id': log_source.log_id
+ })
+ self.__service_connectors[service_connector.id] = record
+ except Exception as e:
+ record = {
+ "id": connector.identifier,
+ "display_name": connector.display_name,
+ "deep_link": self.__generate_csv_hyperlink(deep_link, connector.display_name),
+ "description": connector.additional_details['description'],
+ "freeform_tags": connector.freeform_tags,
+ "defined_tags": connector.defined_tags,
+ "lifecycle_state": connector.lifecycle_state,
+ # "lifecycle_details": connector.lifecycle_details,
+ "system_tags": "",
+ "time_created": connector.time_created.strftime(self.__iso_time_format),
+ # "time_updated": str(connector.time_updated),
+ "target_kind": "",
+ "log_sources": [],
+ "region": region_key,
+ "notes": str(e)
+ }
+ self.__service_connectors[connector.identifier] = record
+ # Returning Service Connectors
+ print("\tProcessed " + str(len(self.__service_connectors)) + " Service Connectors")
+ return self.__service_connectors
+ except Exception as e:
+ raise RuntimeError("Error in __sch_read_service_connectors " + str(e.args))
+
+ ##########################################################################
+ # Resources in root compartment
+ ##########################################################################
+ def __search_resources_in_root_compartment(self):
+
+ # query = []
+ # resources_in_root_data = []
+ # record = []
+ query_non_compliant = "query VCN, instance, volume, filesystem, bucket, autonomousdatabase, database, dbsystem resources where compartmentId = '" + self.__tenancy.id + "'"
+ query_all_resources = "query all resources where compartmentId = '" + self.__tenancy.id + "'"
+ # resources_in_root_data = self.__search_run_structured_query(query)
+
+ for region_key, region_values in self.__regions.items():
+ try:
+ # Searching for non compliant resources in root compartment
+ structured_search_query = oci.resource_search.models.StructuredSearchDetails(query=query_non_compliant)
+ search_results = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=structured_search_query
+ ).data
+
+ for item in search_results:
+ record = {
+ "display_name": item.display_name,
+ "id": item.identifier,
+ "region": region_key
+ }
+ self.__resources_in_root_compartment.append(record)
+
+ # Searching for all resources in the root compartment
+ structured_search_all_query = oci.resource_search.models.StructuredSearchDetails(query=query_all_resources)
+ structured_search_all_resources = oci.pagination.list_call_get_all_results(
+ region_values['search_client'].search_resources,
+ search_details=structured_search_all_query
+ ).data
+
+ for item in structured_search_all_resources:
+ # Ignoring global resources like IAM (the region component of a global resource's OCID is empty)
+ if item.identifier.split('.')[3]:
+ self.cis_foundations_benchmark_1_2['5.2']['Total'].append(item)
+
+ except Exception as e:
+ raise RuntimeError(
+ "Error in __search_resources_in_root_compartment " + str(e.args))
+
+ print("\tProcessed " + str(len(self.__resources_in_root_compartment)) + " resources in the root compartment")
+ return self.__resources_in_root_compartment
+
+ ##########################################################################
+ # Analyzes Tenancy Data for CIS Report
+ ##########################################################################
+ def __report_cis_analyze_tenancy_data(self):
+
+ # 1.1 Check - Checking for policy statements that are not restricted to a service
+
+ for policy in self.__policies:
+ for statement in policy['statements']:
+ if "allow group".upper() in statement.upper() \
+ and ("to manage all-resources".upper() in statement.upper()) \
+ and policy['name'].upper() != "Tenant Admin Policy".upper():
+ # Any policy granting a group manage all-resources (other than the Tenant Admin Policy) fails this rule
+ self.cis_foundations_benchmark_1_2['1.1']['Status'] = False
+ self.cis_foundations_benchmark_1_2['1.1']['Findings'].append(policy)
+ break
+
+ # 1.2 Check
+ for policy in self.__policies:
+ for statement in policy['statements']:
+ if "allow group".upper() in statement.upper() \
+ and "to manage all-resources in tenancy".upper() in statement.upper() \
+ and policy['name'].upper() != "Tenant Admin Policy".upper():
+
+ self.cis_foundations_benchmark_1_2['1.2']['Status'] = False
+ self.cis_foundations_benchmark_1_2['1.2']['Findings'].append(
+ policy)
+
+ # 1.3 Check - May want to add a service check
+ for policy in self.__policies:
+ if policy['name'].upper() != "Tenant Admin Policy".upper() and policy['name'].upper() != "PSM-root-policy":
+ for statement in policy['statements']:
+ if ("allow group".upper() in statement.upper() and "tenancy".upper() in statement.upper() and ("to manage ".upper() in statement.upper() or "to use".upper() in statement.upper()) and ("all-resources".upper() in statement.upper() or (" groups ".upper() in statement.upper() and " users ".upper() in statement.upper()))):
+ split_statement = statement.split("where")
+ # Checking if there is a where clause
+ if len(split_statement) == 2:
+ # If there is a where clause remove whitespace and quotes
+ clean_where_clause = split_statement[1].upper().replace(" ", "").replace("'", "")
+ if all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.3']["targets"]):
+ pass
+ else:
+ self.cis_foundations_benchmark_1_2['1.3']['Findings'].append(policy)
+ self.cis_foundations_benchmark_1_2['1.3']['Status'] = False
+
+ else:
+ self.cis_foundations_benchmark_1_2['1.3']['Findings'].append(policy)
+ self.cis_foundations_benchmark_1_2['1.3']['Status'] = False
+
+ # CIS Total 1.1, 1.2, 1.3 - Adding all IAM Policies to CIS Total
+ self.cis_foundations_benchmark_1_2['1.1']['Total'] = self.__policies
+ self.cis_foundations_benchmark_1_2['1.2']['Total'] = self.__policies
+ self.cis_foundations_benchmark_1_2['1.3']['Total'] = self.__policies
+
+ # 1.4 Check - Password Policy - Only in home region
+ if self.__tenancy_password_policy:
+ if self.__tenancy_password_policy.password_policy.is_lowercase_characters_required:
+ self.cis_foundations_benchmark_1_2['1.4']['Status'] = True
+ else:
+ self.cis_foundations_benchmark_1_2['1.4']['Status'] = None
+
+ # 1.5 and 1.6 - Checking Identity Domains Password Policy for expiry greater than 365 days and a password history of fewer than 24 passwords
+ debug("__report_cis_analyze_tenancy_data: Identity Domains Enabled is: " + str(self.__identity_domains_enabled))
+ if self.__identity_domains_enabled:
+ for domain in self.__identity_domains:
+ if domain['password_policy']:
+ debug("Policy " + domain['display_name'] + " password expiry is " + str(domain['password_policy']['password_expires_after']))
+ debug("Policy " + domain['display_name'] + " reuse is " + str(domain['password_policy']['num_passwords_in_history']))
+
+ if domain['password_policy']['password_expires_after']:
+ if domain['password_policy']['password_expires_after'] > 365:
+ self.cis_foundations_benchmark_1_2['1.5']['Findings'].append(domain)
+
+
+ if domain['password_policy']['num_passwords_in_history']:
+ if domain['password_policy']['num_passwords_in_history'] < 24:
+ self.cis_foundations_benchmark_1_2['1.6']['Findings'].append(domain)
+
+ else:
+ debug("__report_cis_analyze_tenancy_data 1.5 and 1.6 no password policy")
+ self.cis_foundations_benchmark_1_2['1.5']['Findings'].append(domain)
+ self.cis_foundations_benchmark_1_2['1.6']['Findings'].append(domain)
+
+
+ if self.cis_foundations_benchmark_1_2['1.5']['Findings']:
+ self.cis_foundations_benchmark_1_2['1.5']['Status'] = False
+ else:
+ self.cis_foundations_benchmark_1_2['1.5']['Status'] = True
+
+ if self.cis_foundations_benchmark_1_2['1.6']['Findings']:
+ self.cis_foundations_benchmark_1_2['1.6']['Status'] = False
+ else:
+ self.cis_foundations_benchmark_1_2['1.6']['Status'] = True
+
+ # 1.7 Check - Local Users w/o MFA
+ for user in self.__users:
+ if user['identity_provider_id'] is None and user['can_use_console_password'] and not (user['is_mfa_activated']) and user['lifecycle_state'] == 'ACTIVE':
+ self.cis_foundations_benchmark_1_2['1.7']['Status'] = False
+ self.cis_foundations_benchmark_1_2['1.7']['Findings'].append(
+ user)
+
+ # CIS Total 1.7 Adding - All Users to CIS Total
+ self.cis_foundations_benchmark_1_2['1.7']['Total'] = self.__users
+
+ # 1.8 Check - API Keys over 90
+ for user in self.__users:
+ if user['api_keys']:
+ for key in user['api_keys']:
+ if self.api_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format) and key['lifecycle_state'] == 'ACTIVE':
+ self.cis_foundations_benchmark_1_2['1.8']['Status'] = False
+ finding = {
+ "user_name": user['name'],
+ "user_id": user['id'],
+ "key_id": key['id'],
+ 'fingerprint': key['fingerprint'],
+ 'inactive_status': key['inactive_status'],
+ 'lifecycle_state': key['lifecycle_state'],
+ 'time_created': key['time_created']
+ }
+
+ self.cis_foundations_benchmark_1_2['1.8']['Findings'].append(
+ finding)
+
+                    # CIS Total 1.8 Adding - API Keys to CIS Total
+ self.cis_foundations_benchmark_1_2['1.8']['Total'].append(key)
+
+ # CIS 1.9 Check - Old Customer Secrets
+ for user in self.__users:
+ if user['customer_secret_keys']:
+ for key in user['customer_secret_keys']:
+ if self.api_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format) and key['lifecycle_state'] == 'ACTIVE':
+ self.cis_foundations_benchmark_1_2['1.9']['Status'] = False
+
+ finding = {
+ "user_name": user['name'],
+ "user_id": user['id'],
+ "id": key['id'],
+ 'display_name': key['display_name'],
+ 'inactive_status': key['inactive_status'],
+ 'lifecycle_state': key['lifecycle_state'],
+ 'time_created': key['time_created'],
+ 'time_expires': key['time_expires'],
+ }
+
+ self.cis_foundations_benchmark_1_2['1.9']['Findings'].append(
+ finding)
+
+ # CIS Total 1.9 Adding - Customer Secrets to CIS Total
+ self.cis_foundations_benchmark_1_2['1.9']['Total'].append(key)
+
+ # CIS 1.10 Check - Old Auth Tokens
+ for user in self.__users:
+ if user['auth_tokens']:
+ for key in user['auth_tokens']:
+ if self.api_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format) and key['lifecycle_state'] == 'ACTIVE':
+ self.cis_foundations_benchmark_1_2['1.10']['Status'] = False
+
+ finding = {
+ "user_name": user['name'],
+ "user_id": user['id'],
+ "id": key['id'],
+ "description": key['description'],
+ "inactive_status": key['inactive_status'],
+ "lifecycle_state": key['lifecycle_state'],
+ "time_created": key['time_created'],
+ "time_expires": key['time_expires'],
+ "token": key['token']
+ }
+
+ self.cis_foundations_benchmark_1_2['1.10']['Findings'].append(
+ finding)
+
+                    # CIS Total 1.10 Adding - Auth Tokens to CIS Total
+ self.cis_foundations_benchmark_1_2['1.10']['Total'].append(
+ key)
+
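The 1.8-1.10 checks above all apply the same age test: a credential's creation timestamp is compared against a pre-computed cutoff (`self.api_key_time_max_datetime`). A minimal standalone sketch of that comparison, with an assumed ISO timestamp format (the toolkit parses with its own `self.__iso_time_format`, not shown in this hunk):

```python
import datetime

# Assumed timestamp format for this sketch; the real script uses its own format string.
ISO_TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f%z"

def is_credential_too_old(time_created, max_age_days=90, now=None):
    """Return True when a credential was created more than max_age_days ago."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    cutoff = now - datetime.timedelta(days=max_age_days)
    created = datetime.datetime.strptime(time_created, ISO_TIME_FORMAT)
    # Matches the script's test: cutoff >= time_created means the key is a finding
    return cutoff >= created
```

A key created 120 days ago would be flagged, one created 30 days ago would not.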
+ # CIS 1.11 Active Admins with API keys
+ # Iterating through all users to see if they have API Keys and if they are active users
+ for user in self.__users:
+ if 'Administrators' in user['groups'] and user['api_keys'] and user['lifecycle_state'] == 'ACTIVE':
+ self.cis_foundations_benchmark_1_2['1.11']['Status'] = False
+ self.cis_foundations_benchmark_1_2['1.11']['Findings'].append(
+ user)
+
+            # CIS Total 1.11 Adding - All IAM Users in Administrator group to CIS Total
+ if 'Administrators' in user['groups'] and user['lifecycle_state'] == 'ACTIVE':
+ self.cis_foundations_benchmark_1_2['1.11']['Total'].append(user)
+
+        # CIS 1.12 Check - Ensure local (non-federated) IAM users have their email address verified
+        # Iterating through all active local users to see if their email address is verified
+ for user in self.__users:
+ if user['external_identifier'] is None and user['lifecycle_state'] == 'ACTIVE' and not (user['email_verified']):
+ self.cis_foundations_benchmark_1_2['1.12']['Status'] = False
+ self.cis_foundations_benchmark_1_2['1.12']['Findings'].append(
+ user)
+
+        # CIS Total 1.12 Adding - All IAM Users to CIS Total
+ self.cis_foundations_benchmark_1_2['1.12']['Total'] = self.__users
+
+ # CIS 1.13 Check - Ensure Dynamic Groups are used for OCI instances, OCI Cloud Databases and OCI Function to access OCI resources
+        # Iterating through all dynamic groups to ensure at least one matches fnfunc, instance or autonomous resources. Uses reverse logic, so Status starts as False
+ for dynamic_group in self.__dynamic_groups:
+ if any(oci_resource.upper() in str(dynamic_group['matching_rule'].upper()) for oci_resource in self.cis_iam_checks['1.13']['resources']):
+ self.cis_foundations_benchmark_1_2['1.13']['Status'] = True
+ else:
+ self.cis_foundations_benchmark_1_2['1.13']['Findings'].append(
+ dynamic_group)
+ # Clearing finding
+ if self.cis_foundations_benchmark_1_2['1.13']['Status']:
+ self.cis_foundations_benchmark_1_2['1.13']['Findings'] = []
+
+        # CIS Total 1.13 Adding - All Dynamic Groups to CIS Total
+ self.cis_foundations_benchmark_1_2['1.13']['Total'] = self.__dynamic_groups
+
+ # CIS 1.14 Check - Ensure storage service-level admins cannot delete resources they manage.
+ # Iterating through all policies
+ for policy in self.__policies:
+ if policy['name'].upper() != "Tenant Admin Policy".upper() and policy['name'].upper() != "PSM-root-policy":
+ for statement in policy['statements']:
+ for resource in self.cis_iam_checks['1.14']:
+ if "allow group".upper() in statement.upper() and "manage".upper() in statement.upper() and resource.upper() in statement.upper():
+ split_statement = statement.split("where")
+ if len(split_statement) == 2:
+ clean_where_clause = split_statement[1].upper().replace(" ", "").replace("'", "")
+ if all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14'][resource]) and \
+ not(all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14-storage-admin'][resource])):
+ debug("__report_cis_analyze_tenancy_data no permissions to delete storage : " + str(policy['name']))
+
+ pass
+ # Checking if this is the Storage admin with allowed
+ elif all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14-storage-admin'][resource]) and \
+ not(all(permission.upper() in clean_where_clause for permission in self.cis_iam_checks['1.14'][resource])):
+ debug("__report_cis_analyze_tenancy_data storage admin policy is : " + str(policy['name']))
+ pass
+ else:
+ self.cis_foundations_benchmark_1_2['1.14']['Findings'].append(policy)
+                                    debug("__report_cis_analyze_tenancy_data else policy is:\n" + str(policy['name']))
+
+ else:
+ self.cis_foundations_benchmark_1_2['1.14']['Findings'].append(policy)
+
+ if self.cis_foundations_benchmark_1_2['1.14']['Findings']:
+ self.cis_foundations_benchmark_1_2['1.14']['Status'] = False
+ else:
+ self.cis_foundations_benchmark_1_2['1.14']['Status'] = True
+
+        # CIS Total 1.14 Adding - All IAM Policies to CIS Total
+ self.cis_foundations_benchmark_1_2['1.14']['Total'] = self.__policies
+
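The 1.14 policy test above normalizes a statement's where clause (strip spaces and single quotes, compare upper-case) and then checks that every required `request.permission` condition appears in it. A hedged standalone sketch of that normalization, using a hypothetical policy statement:

```python
def where_clause_contains(statement, required_conditions):
    """True when the statement has a where clause containing every required
    condition, ignoring whitespace, single quotes and case (as in the 1.14 check)."""
    parts = statement.split("where")
    if len(parts) != 2:
        return False                       # no where clause at all
    clean = parts[1].upper().replace(" ", "").replace("'", "")
    return all(cond.upper() in clean for cond in required_conditions)

# Hypothetical storage-admin style statement for illustration only
statement = ("Allow group StorageUsers to manage object-family in tenancy "
             "where all {request.permission != 'OBJECT_DELETE', "
             "request.permission != 'BUCKET_DELETE'}")
required = ["request.permission!=OBJECT_DELETE",
            "request.permission!=BUCKET_DELETE"]
```

A statement with no where clause fails the test outright, mirroring the script's else branch that records the policy as a finding.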
+        # CIS 2.1 and 2.2 Check - Security List Ingress from 0.0.0.0/0 on ports 22, 3389
+ for sl in self.__network_security_lists:
+ for irule in sl['ingress_security_rules']:
+ if irule['source'] == "0.0.0.0/0" and irule['protocol'] == '6':
+ if irule['tcp_options'] and irule['tcp_options']['destinationPortRange']:
+ port_min = irule['tcp_options']['destinationPortRange']['min']
+ port_max = irule['tcp_options']['destinationPortRange']['max']
+ ports_range = range(port_min, port_max + 1)
+ if 22 in ports_range:
+ self.cis_foundations_benchmark_1_2['2.1']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.1']['Findings'].append(sl)
+ if 3389 in ports_range:
+ self.cis_foundations_benchmark_1_2['2.2']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.2']['Findings'].append(sl)
+ break
+ else:
+ # If TCP Options is null it includes all ports
+ self.cis_foundations_benchmark_1_2['2.1']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.1']['Findings'].append(sl)
+ self.cis_foundations_benchmark_1_2['2.2']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.2']['Findings'].append(sl)
+ break
+ elif irule['source'] == "0.0.0.0/0" and irule['protocol'] == 'all':
+                    # All protocols allowed, including TCP on all ports
+ self.cis_foundations_benchmark_1_2['2.1']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.1']['Findings'].append(sl)
+ self.cis_foundations_benchmark_1_2['2.2']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.2']['Findings'].append(sl)
+ break
+
+        # CIS Total 2.1, 2.2 Adding - All SLs to CIS Total
+ self.cis_foundations_benchmark_1_2['2.1']['Total'] = self.__network_security_lists
+ self.cis_foundations_benchmark_1_2['2.2']['Total'] = self.__network_security_lists
+
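The 2.1/2.2 scan above can be condensed into a single predicate: a rule is a finding when it allows 0.0.0.0/0 over TCP (protocol 6, or 'all') and its destination port range, if any, covers the sensitive port. A self-contained sketch using plain dicts shaped like the security-list rules in this script (field names assumed from the loop above):

```python
def rule_opens_port(rule, port):
    """True when an ingress rule from 0.0.0.0/0 exposes `port` over TCP.
    Missing tcp_options, a missing port range, or protocol 'all' means
    every port is reachable."""
    if rule.get('source') != '0.0.0.0/0':
        return False
    if rule.get('protocol') == 'all':
        return True
    if rule.get('protocol') != '6':        # IANA protocol number 6 = TCP
        return False
    opts = rule.get('tcp_options')
    if not opts or not opts.get('destinationPortRange'):
        return True                        # no range given: all TCP ports
    port_range = opts['destinationPortRange']
    return port_range['min'] <= port <= port_range['max']
```

Checking `rule_opens_port(rule, 22)` and `rule_opens_port(rule, 3389)` then maps directly onto the 2.1 and 2.2 findings.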
+        # CIS 2.5 Check - any rule with 0.0.0.0/0 where protocol not 1 (ICMP)
+        # CIS Total 2.5 Adding - All Default Security Lists to CIS Total
+ for sl in self.__network_security_lists:
+ if sl['display_name'].startswith("Default Security List for "):
+ self.cis_foundations_benchmark_1_2['2.5']['Total'].append(sl)
+ for irule in sl['ingress_security_rules']:
+ if irule['source'] == "0.0.0.0/0" and irule['protocol'] != '1':
+ self.cis_foundations_benchmark_1_2['2.5']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.5']['Findings'].append(
+ sl)
+ break
+
+ # CIS 2.3 and 2.4 Check - Network Security Groups Ingress from 0.0.0.0/0 on ports 22, 3389
+ for nsg in self.__network_security_groups:
+ for rule in nsg['rules']:
+ if rule['source'] == "0.0.0.0/0" and rule['protocol'] == '6':
+ if rule['tcp_options'] and rule['tcp_options'].destination_port_range:
+ port_min = rule['tcp_options'].destination_port_range.min
+ port_max = rule['tcp_options'].destination_port_range.max
+ ports_range = range(port_min, port_max + 1)
+ if 22 in ports_range:
+ self.cis_foundations_benchmark_1_2['2.3']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.3']['Findings'].append(
+ nsg)
+ if 3389 in ports_range:
+ self.cis_foundations_benchmark_1_2['2.4']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.4']['Findings'].append(nsg)
+ break
+ else:
+ # If TCP Options is null it includes all ports
+ self.cis_foundations_benchmark_1_2['2.3']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.3']['Findings'].append(nsg)
+ self.cis_foundations_benchmark_1_2['2.4']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.4']['Findings'].append(nsg)
+ break
+ elif rule['source'] == "0.0.0.0/0" and rule['protocol'] == 'all':
+                    # All protocols allowed, including TCP on all ports
+ self.cis_foundations_benchmark_1_2['2.3']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.3']['Findings'].append(nsg)
+ self.cis_foundations_benchmark_1_2['2.4']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.4']['Findings'].append(nsg)
+ break
+
+        # CIS Total 2.3 & 2.4 Adding - All NSGs to CIS Total
+ self.cis_foundations_benchmark_1_2['2.3']['Total'] = self.__network_security_groups
+ self.cis_foundations_benchmark_1_2['2.4']['Total'] = self.__network_security_groups
+
+ # CIS 2.6 - Ensure Oracle Integration Cloud (OIC) access is restricted to allowed sources
+        # Iterating through OIC instances; checking they have network access rules and that 0.0.0.0/0 is not in the allow list
+ for integration_instance in self.__integration_instances:
+ if not (integration_instance['network_endpoint_details']):
+ self.cis_foundations_benchmark_1_2['2.6']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.6']['Findings'].append(
+ integration_instance)
+ elif integration_instance['network_endpoint_details']:
+ if "0.0.0.0/0" in str(integration_instance['network_endpoint_details']):
+ self.cis_foundations_benchmark_1_2['2.6']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.6']['Findings'].append(
+ integration_instance)
+
+ # CIS Total 2.6 Adding - All OIC Instances to CIS Total
+ self.cis_foundations_benchmark_1_2['2.6']['Total'] = self.__integration_instances
+
+ # CIS 2.7 - Ensure Oracle Analytics Cloud (OAC) access is restricted to allowed sources or deployed within a VCN
+ for analytics_instance in self.__analytics_instances:
+ if analytics_instance['network_endpoint_type'].upper() == 'PUBLIC':
+ if not (analytics_instance['network_endpoint_details'].whitelisted_ips):
+ self.cis_foundations_benchmark_1_2['2.7']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.7']['Findings'].append(analytics_instance)
+
+ elif "0.0.0.0/0" in analytics_instance['network_endpoint_details'].whitelisted_ips:
+ self.cis_foundations_benchmark_1_2['2.7']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.7']['Findings'].append(
+ analytics_instance)
+
+ # CIS Total 2.7 Adding - All OAC Instances to CIS Total
+ self.cis_foundations_benchmark_1_2['2.7']['Total'] = self.__analytics_instances
+
+ # CIS 2.8 Check - Ensure Oracle Autonomous Shared Databases (ADB) access is restricted to allowed sources or deployed within a VCN
+        # Iterating through ADBs, checking for missing network restrictions (no whitelisted IPs and no subnet) or allowed IP 0.0.0.0/0
+ # Issue 295 fixed
+ for autonomous_database in self.__autonomous_databases:
+ if autonomous_database['lifecycle_state'] not in [ oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATED, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_TERMINATING, oci.database.models.AutonomousDatabaseSummary.LIFECYCLE_STATE_UNAVAILABLE ]:
+ if not (autonomous_database['whitelisted_ips']) and not (autonomous_database['subnet_id']):
+ self.cis_foundations_benchmark_1_2['2.8']['Status'] = False
+ self.cis_foundations_benchmark_1_2['2.8']['Findings'].append(
+ autonomous_database)
+ elif autonomous_database['whitelisted_ips']:
+                    if '0.0.0.0/0' in str(autonomous_database['whitelisted_ips']):
+                        self.cis_foundations_benchmark_1_2['2.8']['Status'] = False
+                        self.cis_foundations_benchmark_1_2['2.8']['Findings'].append(
+                            autonomous_database)
+
+ # CIS Total 2.8 Adding - All ADBs to CIS Total
+ self.cis_foundations_benchmark_1_2['2.8']['Total'] = self.__autonomous_databases
+
+        # CIS 3.1 Check - Ensure Audit log retention >= 365 days - Only checking in home region
+ if self.__audit_retention_period >= 365:
+ self.cis_foundations_benchmark_1_2['3.1']['Status'] = True
+
+ # CIS Check 3.2 - Check for Default Tags in Root Compartment
+ # Iterate through tags looking for ${iam.principal.name}
+ for tag in self.__tag_defaults:
+ if tag['value'] == "${iam.principal.name}":
+ self.cis_foundations_benchmark_1_2['3.2']['Status'] = True
+
+ # CIS Total 3.2 Adding - All Tag Defaults to CIS Total
+ self.cis_foundations_benchmark_1_2['3.2']['Total'] = self.__tag_defaults
+
+ # CIS Check 3.3 - Check for Active Notification and Subscription
+ if len(self.__subscriptions) > 0:
+ self.cis_foundations_benchmark_1_2['3.3']['Status'] = True
+
+        # CIS Check 3.3 Total - All Subscriptions to CIS Total
+ self.cis_foundations_benchmark_1_2['3.3']['Total'] = self.__subscriptions
+
+ # CIS Checks 3.4 - 3.13
+ # Iterate through all event rules
+ for event in self.__event_rules:
+ # Convert Event Condition to dict
+ jsonable_str = event['condition'].lower().replace("'", "\"")
+ try:
+ event_dict = json.loads(jsonable_str)
+ except Exception:
+ print("*** Invalid Event Condition for event (not in JSON format): " + event['display_name'] + " ***")
+ event_dict = {}
+ # Issue 256: 'eventtype' not in event_dict (i.e. missing in event condition)
+ if event_dict and 'eventtype' in event_dict:
+ for key, changes in self.cis_monitoring_checks.items():
+ # Checking if all cis change list is a subset of event condition
+ try:
+ if (all(x in event_dict['eventtype'] for x in changes)):
+ self.cis_foundations_benchmark_1_2[key]['Status'] = True
+ except Exception:
+ print("*** Invalid Event Data for event: " + event['display_name'] + " ***")
+
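The 3.4-3.13 loop above has to cope with OCI event conditions being JSON-ish strings with single quotes, so it lower-cases them and swaps quote characters before parsing, then tests whether each CIS change list is a subset of the event's `eventtype` array. A standalone sketch with a hypothetical event condition:

```python
import json

# Hypothetical event condition string, shaped like an OCI Events rule condition
condition = "{'eventType': ['com.oraclecloud.identitycontrolplane.createuser']}"
required_changes = ['com.oraclecloud.identitycontrolplane.createuser']

# Lower-case the condition and swap single for double quotes so it parses as JSON
event_dict = json.loads(condition.lower().replace("'", '"'))
check_passes = all(change in event_dict.get('eventtype', [])
                   for change in required_changes)
```

Using `.get('eventtype', [])` also covers the Issue 256 case above, where the key is missing from the condition entirely.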
+ # CIS Check 3.14 - VCN FlowLog enable
+ # Generate list of subnets IDs
+ for subnet in self.__network_subnets:
+ if not (subnet['id'] in self.__subnet_logs):
+ self.cis_foundations_benchmark_1_2['3.14']['Status'] = False
+ self.cis_foundations_benchmark_1_2['3.14']['Findings'].append(
+ subnet)
+
+ # CIS Check 3.14 Total - Adding All Subnets to total
+ self.cis_foundations_benchmark_1_2['3.14']['Total'] = self.__network_subnets
+
+ # CIS Check 3.15 - Cloud Guard enabled
+ debug("__report_cis_analyze_tenancy_data Cloud Guard Check: " + str(self.__cloud_guard_config_status))
+ if self.__cloud_guard_config_status == 'ENABLED':
+ self.cis_foundations_benchmark_1_2['3.15']['Status'] = True
+ else:
+ self.cis_foundations_benchmark_1_2['3.15']['Status'] = False
+
+        # CIS Check 3.16 - Encryption keys over 365 days old
+ # Generating list of keys
+ for vault in self.__vaults:
+ for key in vault['keys']:
+ if self.kms_key_time_max_datetime >= datetime.datetime.strptime(key['time_created'], self.__iso_time_format):
+ self.cis_foundations_benchmark_1_2['3.16']['Status'] = False
+ self.cis_foundations_benchmark_1_2['3.16']['Findings'].append(
+ key)
+
+ # CIS Check 3.16 Total - Adding Key to total
+ self.cis_foundations_benchmark_1_2['3.16']['Total'].append(key)
+
+ # CIS Check 3.17 - Object Storage with Logs
+ # Generating list of buckets names
+
+ for bucket in self.__buckets:
+ if not (bucket['name'] in self.__write_bucket_logs):
+ self.cis_foundations_benchmark_1_2['3.17']['Status'] = False
+ self.cis_foundations_benchmark_1_2['3.17']['Findings'].append(
+ bucket)
+
+ # CIS Check 3.17 Total - Adding All Buckets to total
+ self.cis_foundations_benchmark_1_2['3.17']['Total'] = self.__buckets
+
+ # CIS Section 4.1 Bucket Checks
+ # Generating list of buckets names
+ for bucket in self.__buckets:
+ if 'public_access_type' in bucket:
+ if bucket['public_access_type'] != 'NoPublicAccess':
+ self.cis_foundations_benchmark_1_2['4.1.1']['Status'] = False
+ self.cis_foundations_benchmark_1_2['4.1.1']['Findings'].append(
+ bucket)
+
+ if 'kms_key_id' in bucket:
+ if not (bucket['kms_key_id']):
+ self.cis_foundations_benchmark_1_2['4.1.2']['Findings'].append(
+ bucket)
+ self.cis_foundations_benchmark_1_2['4.1.2']['Status'] = False
+
+ if 'versioning' in bucket:
+ if bucket['versioning'] != "Enabled":
+ self.cis_foundations_benchmark_1_2['4.1.3']['Findings'].append(
+ bucket)
+ self.cis_foundations_benchmark_1_2['4.1.3']['Status'] = False
+
+ # CIS Check 4.1.1,4.1.2,4.1.3 Total - Adding All Buckets to total
+ self.cis_foundations_benchmark_1_2['4.1.1']['Total'] = self.__buckets
+ self.cis_foundations_benchmark_1_2['4.1.2']['Total'] = self.__buckets
+ self.cis_foundations_benchmark_1_2['4.1.3']['Total'] = self.__buckets
+
+ # CIS Section 4.2.1 Block Volume Checks
+ # Generating list of block volumes names
+ for volume in self.__block_volumes:
+ if 'kms_key_id' in volume:
+ if not (volume['kms_key_id']):
+ self.cis_foundations_benchmark_1_2['4.2.1']['Findings'].append(
+ volume)
+ self.cis_foundations_benchmark_1_2['4.2.1']['Status'] = False
+
+ # CIS Check 4.2.1 Total - Adding All Block Volumes to total
+ self.cis_foundations_benchmark_1_2['4.2.1']['Total'] = self.__block_volumes
+
+ # CIS Section 4.2.2 Boot Volume Checks
+ # Generating list of boot names
+ for boot_volume in self.__boot_volumes:
+ if 'kms_key_id' in boot_volume:
+ if not (boot_volume['kms_key_id']):
+ self.cis_foundations_benchmark_1_2['4.2.2']['Findings'].append(
+ boot_volume)
+ self.cis_foundations_benchmark_1_2['4.2.2']['Status'] = False
+
+        # CIS Check 4.2.2 Total - Adding All Boot Volumes to total
+ self.cis_foundations_benchmark_1_2['4.2.2']['Total'] = self.__boot_volumes
+
+ # CIS Section 4.3.1 FSS Checks
+ # Generating list of FSS names
+ for file_system in self.__file_storage_system:
+ if 'kms_key_id' in file_system:
+ if not (file_system['kms_key_id']):
+ self.cis_foundations_benchmark_1_2['4.3.1']['Findings'].append(
+ file_system)
+ self.cis_foundations_benchmark_1_2['4.3.1']['Status'] = False
+
+        # CIS Check 4.3.1 Total - Adding All File Storage Systems to total
+ self.cis_foundations_benchmark_1_2['4.3.1']['Total'] = self.__file_storage_system
+
+ # CIS Section 5 Checks
+        # Checking if there is more than one compartment, because the ManagedPaaS Compartment always exists
+ if len(self.__compartments) < 2:
+ self.cis_foundations_benchmark_1_2['5.1']['Status'] = False
+
+ if len(self.__resources_in_root_compartment) > 0:
+ for item in self.__resources_in_root_compartment:
+ self.cis_foundations_benchmark_1_2['5.2']['Status'] = False
+ self.cis_foundations_benchmark_1_2['5.2']['Findings'].append(
+ item)
+
+ ##########################################################################
+ # Recursive function the gets the child compartments of a compartment
+ ##########################################################################
+
+ def __get_children(self, parent, compartments):
+ try:
+ kids = compartments[parent]
+ except Exception:
+ kids = []
+
+ if kids:
+ for kid in compartments[parent]:
+ kids = kids + self.__get_children(kid, compartments)
+
+ return kids
+
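The `__get_children` recursion above walks a parent-id to child-ids map depth-first, returning every descendant of a compartment. A standalone mirror of it with toy ids, to show the traversal order:

```python
def get_children(parent, compartments):
    """Depth-first walk of a parent-id -> [child ids] map,
    mirroring __get_children above."""
    kids = list(compartments.get(parent, []))   # direct children (or none)
    for kid in compartments.get(parent, []):
        kids += get_children(kid, compartments)  # then each child's subtree
    return kids

# Toy hierarchy: root -> a, b; a -> a1, a2; a1 -> a1x
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "a1": ["a1x"]}
# get_children("root", tree) -> ['a', 'b', 'a1', 'a2', 'a1x']
```

The audit checks below use exactly this expansion to turn an "include sub-compartments" log source into the full set of compartment ids it covers.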
+ ##########################################################################
+ # Analyzes Tenancy Data for Oracle Best Practices Report
+ ##########################################################################
+ def __obp_analyze_tenancy_data(self):
+
+ #######################################
+ # Budget Checks
+ #######################################
+ # Determines if a Budget Exists with an alert rule
+ if len(self.__budgets) > 0:
+ for budget in self.__budgets:
+ if budget['alert_rule_count'] > 0 and budget['target_compartment_id'] == self.__tenancy.id:
+ self.obp_foundations_checks['Cost_Tracking_Budgets']['Status'] = True
+ self.obp_foundations_checks['Cost_Tracking_Budgets']['OBP'].append(budget)
+ else:
+ self.obp_foundations_checks['Cost_Tracking_Budgets']['Findings'].append(budget)
+
+ # Stores Regional Checks
+ for region_key, region_values in self.__regions.items():
+ self.__obp_regional_checks[region_key] = {
+ "Audit": {
+ "tenancy_level_audit": False,
+ "tenancy_level_include_sub_comps": False,
+ "compartments": [],
+ "findings": []
+ },
+ "VCN": {
+ "subnets": [],
+ "findings": []
+ },
+ "Write_Bucket": {
+ "buckets": [],
+ "findings": []
+ },
+ "Read_Bucket": {
+ "buckets": [],
+ "findings": []
+ },
+ "Network_Connectivity": {
+ "drgs": [],
+ "findings": [],
+ "status": False
+ },
+ }
+
+ #######################################
+ # OCI Audit Log Compartments Checks
+ #######################################
+ list_of_all_compartments = []
+ dict_of_compartments = {}
+ for compartment in self.__compartments:
+ list_of_all_compartments.append(compartment.id)
+
+        # Building a Hash Table of Parent Child Hierarchy for Audit
+ dict_of_compartments = {}
+ for compartment in self.__compartments:
+ if "tenancy" not in compartment.id:
+ try:
+ dict_of_compartments[compartment.compartment_id].append(compartment.id)
+ except Exception:
+ dict_of_compartments[compartment.compartment_id] = []
+ dict_of_compartments[compartment.compartment_id].append(compartment.id)
+
+        # This is used for comparing compartments that are audited to the full list of compartments
+ set_of_all_compartments = set(list_of_all_compartments)
+
+        # Collecting Service Connector logs related to compartments
+ for sch_id, sch_values in self.__service_connectors.items():
+ # Only Active SCH with a target that is configured
+ if sch_values['lifecycle_state'].upper() == "ACTIVE" and sch_values['target_kind']:
+ for source in sch_values['log_sources']:
+ try:
+                        # Checking if the compartment being logged is the Tenancy and it includes all child compartments
+ if source['compartment_id'] == self.__tenancy.id and source['log_group_id'].upper() == "_Audit_Include_Subcompartment".upper():
+ self.__obp_regional_checks[sch_values['region']]['Audit']['tenancy_level_audit'] = True
+ self.__obp_regional_checks[sch_values['region']]['Audit']['tenancy_level_include_sub_comps'] = True
+
+                        # Since it is not the Tenancy, we add the compartment to the list and check if sub compartments are included
+ elif source['log_group_id'].upper() == "_Audit_Include_Subcompartment".upper():
+ self.__obp_regional_checks[sch_values['region']]['Audit']['compartments'] += self.__get_children(source['compartment_id'], dict_of_compartments)
+ elif source['log_group_id'].upper() == "_Audit".upper():
+ self.__obp_regional_checks[sch_values['region']]['Audit']['compartments'].append(source['compartment_id'])
+ except Exception:
+ # There can be empty log groups
+ pass
+ # Analyzing Service Connector Audit Logs to see if each region has all compartments
+ for region_key, region_values in self.__obp_regional_checks.items():
+ # Checking if I already found the tenancy ocid with all child compartments included
+ if not region_values['Audit']['tenancy_level_audit']:
+ audit_findings = set_of_all_compartments - set(region_values['Audit']['compartments'])
+                # If there are items in the set then not everything in the tenancy is being audited
+ if audit_findings:
+ region_values['Audit']['findings'] += list(audit_findings)
+ else:
+ region_values['Audit']['tenancy_level_audit'] = True
+ region_values['Audit']['findings'] = []
+
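The per-region coverage test above reduces to a set difference: subtract the compartments a region's service connectors actually log from the set of all compartments, and anything left over is a finding. A minimal sketch with hypothetical compartment ids:

```python
# Hypothetical ids standing in for the tenancy's compartment OCIDs
all_compartments = {"ocid.comp.a", "ocid.comp.b", "ocid.comp.c"}
logged_compartments = ["ocid.comp.a", "ocid.comp.c"]

# Compartments with no audit log source in this region
audit_findings = all_compartments - set(logged_compartments)
# The region passes tenancy-level audit only when nothing is missing
tenancy_level_audit = not audit_findings
```

Here `ocid.comp.b` is unlogged, so the region fails the tenancy-level audit check, exactly as in the loop above.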
+ # Consolidating Audit findings into the OBP Checks
+ for region_key, region_values in self.__obp_regional_checks.items():
+            # If this flag is not set, not all compartments are logged in the region
+ if not region_values['Audit']['tenancy_level_audit']:
+ self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['Status'] = False
+
+            # If this flag is not set, the region does not have tenancy-level logging with all sub compartments included
+ if not region_values['Audit']['tenancy_level_include_sub_comps']:
+ self.obp_foundations_checks['SIEM_Audit_Incl_Sub_Comp']['Status'] = False
+ self.obp_foundations_checks['SIEM_Audit_Incl_Sub_Comp']['Findings'].append({"region_name": region_key})
+ else:
+ self.obp_foundations_checks['SIEM_Audit_Incl_Sub_Comp']['OBP'].append({"region_name": region_key})
+
+ # Compartment Logs that are missed in the region
+ for compartment in region_values['Audit']['findings']:
+ try:
+ finding = list(filter(lambda source: source['id'] == compartment, self.__raw_compartment))[0]
+ record = {
+ "id": finding['id'],
+ "name": finding['name'],
+ "deep_link": finding['deep_link'],
+ "compartment_id": finding['compartment_id'],
+ "defined_tags": finding['defined_tags'],
+ "description": finding['description'],
+ "freeform_tags": finding['freeform_tags'],
+ "inactive_status": finding['inactive_status'],
+ "is_accessible": finding['is_accessible'],
+ "lifecycle_state": finding['lifecycle_state'],
+ "time_created": finding['time_created'],
+ "region": region_key
+ }
+ except Exception as e:
+ record = {
+ "id": compartment,
+ "name": "Compartment No Longer Exists",
+ "deep_link": "",
+ "compartment_id": "",
+ "defined_tags": "",
+ "description": str(e),
+ "freeform_tags": "",
+ "inactive_status": "",
+ "is_accessible": "",
+ "lifecycle_state": "",
+ "time_created": "",
+ "region": region_key
+ }
+ # Need to check for duplicates before adding the record
+ exists_already = list(filter(lambda source: source['id'] == record['id'] and source['region'] == record['region'], self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['Findings']))
+ if not exists_already:
+ self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['Findings'].append(record)
+
+            # Compartment logs that are captured in the region
+ for compartment in region_values['Audit']['compartments']:
+ try:
+ finding = list(filter(lambda source: source['id'] == compartment, self.__raw_compartment))[0]
+ record = {
+ "id": finding['id'],
+ "name": finding['name'],
+ "deep_link": finding['deep_link'],
+ "compartment_id": finding['compartment_id'],
+ "defined_tags": finding['defined_tags'],
+ "description": finding['description'],
+ "freeform_tags": finding['freeform_tags'],
+ "inactive_status": finding['inactive_status'],
+ "is_accessible": finding['is_accessible'],
+ "lifecycle_state": finding['lifecycle_state'],
+ "time_created": finding['time_created'],
+ "region": region_key
+ }
+ except Exception as e:
+ record = {
+ "id": compartment,
+ "name": "Compartment No Longer Exists",
+ "deep_link": "",
+ "compartment_id": "",
+ "defined_tags": "",
+ "description": str(e),
+ "freeform_tags": "",
+ "inactive_status": "",
+ "is_accessible": "",
+ "lifecycle_state": "",
+ "time_created": "",
+ "region": region_key
+ }
+ # Need to check for duplicates before adding the record
+ exists_already = list(filter(lambda source: source['id'] == record['id'] and source['region'] == record['region'], self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['OBP']))
+ if not exists_already:
+ self.obp_foundations_checks['SIEM_Audit_Log_All_Comps']['OBP'].append(record)
+
+ #######################################
+ # Subnet and Bucket Log Checks
+ #######################################
+ for sch_id, sch_values in self.__service_connectors.items():
+ # Only Active SCH with a target that is configured
+ if sch_values['lifecycle_state'].upper() == "ACTIVE" and sch_values['target_kind']:
+ # Subnet Logs Checks
+ for subnet_id, log_values in self.__subnet_logs.items():
+
+ log_id = log_values['log_id']
+ log_group_id = log_values['log_group_id']
+ log_record = {"sch_id": sch_id, "sch_name": sch_values['display_name'], "id": subnet_id}
+
+ subnet_log_group_in_sch = list(filter(lambda source: source['log_group_id'] == log_group_id, sch_values['log_sources']))
+ subnet_log_in_sch = list(filter(lambda source: source['log_id'] == log_id, sch_values['log_sources']))
+
+                    # Checking if the Subnet's log group is in the SCH's log sources & the log_id is empty, so it covers everything in the log group
+ if subnet_log_group_in_sch and not (subnet_log_in_sch):
+ self.__obp_regional_checks[sch_values['region']]['VCN']['subnets'].append(log_record)
+
+                    # Checking if the Subnet's log id is in the service connector's log sources; if so, add it
+ elif subnet_log_in_sch:
+ self.__obp_regional_checks[sch_values['region']]['VCN']['subnets'].append(log_record)
+
+ # else:
+ # self.__obp_regional_checks[sch_values['region']]['VCN']['findings'].append(subnet_id)
+
+ # Bucket Write Logs Checks
+ for bucket_name, log_values in self.__write_bucket_logs.items():
+ log_id = log_values['log_id']
+ log_group_id = log_values['log_group_id']
+ log_record = {"sch_id": sch_id, "sch_name": sch_values['display_name'], "id": bucket_name}
+ log_region = log_values['region']
+
+ bucket_log_group_in_sch = list(filter(lambda source: source['log_group_id'] == log_group_id and sch_values['region'] == log_region, sch_values['log_sources']))
+ bucket_log_in_sch = list(filter(lambda source: source['log_id'] == log_id and sch_values['region'] == log_region, sch_values['log_sources']))
+
+                    # Checking if the Bucket's log group is in the SCH's log sources & the log_id is empty, so it covers everything in the log group
+ if bucket_log_group_in_sch and not (bucket_log_in_sch):
+ self.__obp_regional_checks[sch_values['region']]['Write_Bucket']['buckets'].append(log_record)
+
+                    # Checking if the Bucket's log id is in the service connector's log sources; if so, add it
+ elif bucket_log_in_sch:
+ self.__obp_regional_checks[sch_values['region']]['Write_Bucket']['buckets'].append(log_record)
+
+ # else:
+ # self.__obp_regional_checks[sch_values['region']]['Write_Bucket']['findings'].append(bucket_name)
+
+ # Bucket Read Log Checks
+
+ for bucket_name, log_values in self.__read_bucket_logs.items():
+ log_id = log_values['log_id']
+ log_group_id = log_values['log_group_id']
+ log_record = {"sch_id": sch_id, "sch_name": sch_values['display_name'], "id": bucket_name}
+
+ log_region = log_values['region']
+
+ bucket_log_group_in_sch = list(filter(lambda source: source['log_group_id'] == log_group_id and sch_values['region'] == log_region, sch_values['log_sources']))
+ bucket_log_in_sch = list(filter(lambda source: source['log_id'] == log_id and sch_values['region'] == log_region, sch_values['log_sources']))
+
+                    # Checking if the Bucket's log group is in the SCH's log sources & the log_id is empty, so it covers everything in the log group
+ if bucket_log_group_in_sch and not (bucket_log_in_sch):
+ self.__obp_regional_checks[sch_values['region']]['Read_Bucket']['buckets'].append(log_record)
+
+                    # Checking if the Bucket's log id is in the service connector's log sources; if so, add it
+ elif bucket_log_in_sch:
+ self.__obp_regional_checks[sch_values['region']]['Read_Bucket']['buckets'].append(log_record)
+
+ # Consolidating regional SERVICE LOGGING findings into centralized finding report
+ for region_key, region_values in self.__obp_regional_checks.items():
+
+ for finding in region_values['VCN']['subnets']:
+ logged_subnet = list(filter(lambda subnet: subnet['id'] == finding['id'], self.__network_subnets))
+ # Checking that the subnet has not already been written to OBP
+ existing_finding = list(filter(lambda subnet: subnet['id'] == finding['id'], self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['OBP']))
+ if len(logged_subnet) != 0:
+ record = logged_subnet[0].copy()
+ record['sch_id'] = finding['sch_id']
+ record['sch_name'] = finding['sch_name']
+
+ if logged_subnet and not (existing_finding):
+ self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['OBP'].append(record)
+ # else:
+ # print("Found this subnet being logged but the subnet does not exist: " + str(finding))
+
+ for finding in region_values['Write_Bucket']['buckets']:
+ logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['id'], self.__buckets))
+ if len(logged_bucket) != 0:
+ record = logged_bucket[0].copy()
+ record['sch_id'] = finding['sch_id']
+ record['sch_name'] = finding['sch_name']
+
+ if logged_bucket:
+ self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['OBP'].append(record)
+
+ for finding in region_values['Read_Bucket']['buckets']:
+ logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['id'], self.__buckets))
+ if len(logged_bucket) != 0:
+ record = logged_bucket[0].copy()
+ record['sch_id'] = finding['sch_id']
+ record['sch_name'] = finding['sch_name']
+
+ if logged_bucket:
+ self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['OBP'].append(record)
+
+ # Checking all buckets to see if they meet one of the OBPs in at least one region
+ for finding in self.__buckets:
+ read_logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['name'] and bucket['region'] == finding['region'], self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['OBP']))
+ if not (read_logged_bucket):
+ self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Findings'].append(finding)
+
+ write_logged_bucket = list(filter(lambda bucket: bucket['name'] == finding['name'] and bucket['region'] == finding['region'], self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['OBP']))
+ if not (write_logged_bucket):
+ self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Findings'].append(finding)
+
+ # Checking all subnets to see if they meet one of the OBPs in at least one region
+ for finding in self.__network_subnets:
+ logged_subnet = list(filter(lambda subnet: subnet['id'] == finding['id'], self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['OBP']))
+ if not (logged_subnet):
+ self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Findings'].append(finding)
+
+ # Setting VCN Flow Logs Findings
+ if self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Findings']:
+ self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Status'] = False
+
+ else:
+ self.obp_foundations_checks['SIEM_VCN_Flow_Logging']['Status'] = True
+
+ # Setting Write Bucket Findings
+ if self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Findings']:
+ self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Status'] = False
+
+ elif not self.__service_connectors:
+ # If there are no service connectors then by default all buckets are not logged
+ self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Status'] = False
+ self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Findings'] += self.__buckets
+
+ else:
+ self.obp_foundations_checks['SIEM_Write_Bucket_Logs']['Status'] = True
+
+ # Setting Read Bucket Findings
+ if self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Findings']:
+ self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Status'] = False
+
+ elif not self.__service_connectors:
+ # If there are no service connectors then by default all buckets are not logged
+ self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Status'] = False
+ self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Findings'] += self.__buckets
+ else:
+ self.obp_foundations_checks['SIEM_Read_Bucket_Logs']['Status'] = True
+
+ #######################################
+ # OBP Networking Checks
+ #######################################
+
+ # Fast Connect Connections
+
+ for drg_id, drg_values in self.__network_drg_attachments.items():
+ number_of_valid_connected_vcns = 0
+ number_of_valid_fast_connect_circuits = 0
+ number_of_valid_site_to_site_connection = 0
+
+ fast_connect_providers = set()
+ customer_premises_equipment = set()
+
+ for attachment in drg_values:
+ if attachment['network_type'].upper() == 'VCN':
+ # Checking if DRG has a valid VCN attached to it
+ number_of_valid_connected_vcns += 1
+
+ elif attachment['network_type'].upper() == 'IPSEC_TUNNEL':
+ # Checking if the IPSec Connection has both tunnels up
+ for ipsec_connection in self.__network_ipsec_connections[drg_id]:
+ if ipsec_connection['tunnels_up']:
+ # Good IPSec connection; increment the valid site-to-site count and track CPEs
+ customer_premises_equipment.add(ipsec_connection['cpe_id'])
+ number_of_valid_site_to_site_connection += 1
+
+ elif attachment['network_type'].upper() == 'VIRTUAL_CIRCUIT':
+
+ # Checking for provisioned, BGP-enabled virtual circuits associated with this attachment
+ for virtual_circuit in self.__network_fastconnects[attachment['drg_id']]:
+ if attachment['network_id'] == virtual_circuit['id']:
+ if virtual_circuit['lifecycle_state'].upper() == 'PROVISIONED' and virtual_circuit['bgp_session_state'].upper() == "UP":
+ # Good virtual circuit; increment the count and track the provider name
+ fast_connect_providers.add(virtual_circuit['provider_name'])
+ number_of_valid_fast_connect_circuits += 1
+
+ try:
+ record = {
+ "drg_id": drg_id,
+ "drg_display_name": self.__network_drgs[drg_id]['display_name'],
+ "region": self.__network_drgs[drg_id]['region'],
+ "number_of_connected_vcns": number_of_valid_connected_vcns,
+ "number_of_customer_premises_equipment": len(customer_premises_equipment),
+ "number_of_connected_ipsec_connections": number_of_valid_site_to_site_connection,
+ "number_of_fastconnect_circuits": number_of_valid_fast_connect_circuits,
+ "number_of_fastconnect_providers": len(fast_connect_providers),
+ }
+ except Exception:
+ record = {
+ "drg_id": drg_id,
+ "drg_display_name": "Deleted with an active attachment",
+ "region": attachment['region'],
+ "number_of_connected_vcns": 0,
+ "number_of_customer_premises_equipment": 0,
+ "number_of_connected_ipsec_connections": 0,
+ "number_of_fastconnect_circuits": 0,
+ "number_of_fastconnect_providers": 0,
+ }
+ print(f"This DRG: {drg_id} is deleted but has an active attachment: {attachment['display_name']}")
+
+ # Checking if the DRG and its connected resources are aligned with best practices:
+ # one attached VCN, one VPN connection, and one FastConnect
+ if number_of_valid_connected_vcns and number_of_valid_site_to_site_connection and number_of_valid_fast_connect_circuits:
+ self.__obp_regional_checks[record['region']]["Network_Connectivity"]["drgs"].append(record)
+ self.__obp_regional_checks[record['region']]["Network_Connectivity"]["status"] = True
+ # Two VPN site-to-site connections to separate CPEs
+ elif number_of_valid_connected_vcns and number_of_valid_site_to_site_connection and len(customer_premises_equipment) >= 2:
+ self.__obp_regional_checks[record['region']]["Network_Connectivity"]["drgs"].append(record)
+ self.__obp_regional_checks[record['region']]["Network_Connectivity"]["status"] = True
+ # Two FastConnects from Different providers
+ elif number_of_valid_connected_vcns and number_of_valid_fast_connect_circuits and len(fast_connect_providers) >= 2:
+ self.__obp_regional_checks[record['region']]["Network_Connectivity"]["drgs"].append(record)
+ self.__obp_regional_checks[record['region']]["Network_Connectivity"]["status"] = True
+ else:
+ self.__obp_regional_checks[record['region']]["Network_Connectivity"]["findings"].append(record)
+
+ # Consolidating regional network connectivity findings
+
+ for region_key, region_values in self.__obp_regional_checks.items():
+ # Connectivity is assumed to be required in every region; if any one region fails the check, the overall status fails
+ if not region_values["Network_Connectivity"]["status"]:
+ self.obp_foundations_checks['Networking_Connectivity']['Status'] = False
+
+ self.obp_foundations_checks["Networking_Connectivity"]["Findings"] += region_values["Network_Connectivity"]["findings"]
+ self.obp_foundations_checks["Networking_Connectivity"]["OBP"] += region_values["Network_Connectivity"]["drgs"]
+
+ #######################################
+ # Cloud Guard Checks
+ #######################################
+ cloud_guard_record = {
+ "cloud_guard_enabled": self.__cloud_guard_config_status == 'ENABLED',
+ "target_at_root": False,
+ "target_configuration_detector": False,
+ "target_configuration_detector_customer_owned": False,
+ "target_activity_detector": False,
+ "target_activity_detector_customer_owned": False,
+ "target_threat_detector": False,
+ "target_threat_detector_customer_owned": False,
+ "target_responder_recipes": False,
+ "target_responder_recipes_customer_owned": False,
+ "target_responder_event_rule": False,
+ }
+
+ try:
+ # Cloud Guard target attached to the root compartment with configuration, activity, and threat detectors plus a responder
+ if self.__cloud_guard_targets[self.__tenancy.id]:
+
+ cloud_guard_record['target_at_root'] = True
+
+ if self.__cloud_guard_targets[self.__tenancy.id]['target_detector_recipes']:
+ for recipe in self.__cloud_guard_targets[self.__tenancy.id]['target_detector_recipes']:
+ if recipe.detector.upper() == 'IAAS_CONFIGURATION_DETECTOR':
+ cloud_guard_record['target_configuration_detector'] = True
+ if recipe.owner.upper() == "CUSTOMER":
+ cloud_guard_record['target_configuration_detector_customer_owned'] = True
+
+ elif recipe.detector.upper() == 'IAAS_ACTIVITY_DETECTOR':
+ cloud_guard_record['target_activity_detector'] = True
+ if recipe.owner.upper() == "CUSTOMER":
+ cloud_guard_record['target_activity_detector_customer_owned'] = True
+
+ elif recipe.detector.upper() == 'IAAS_THREAT_DETECTOR':
+ cloud_guard_record['target_threat_detector'] = True
+ if recipe.owner.upper() == "CUSTOMER":
+ cloud_guard_record['target_threat_detector_customer_owned'] = True
+
+ if self.__cloud_guard_targets[self.__tenancy.id]['target_responder_recipes']:
+ cloud_guard_record['target_responder_recipes'] = True
+ for recipe in self.__cloud_guard_targets[self.__tenancy.id]['target_responder_recipes']:
+ if recipe.owner.upper() == 'CUSTOMER':
+ cloud_guard_record['target_responder_recipes_customer_owned'] = True
+
+ for rule in recipe.effective_responder_rules:
+ if rule.responder_rule_id.upper() == 'EVENT' and rule.details.is_enabled:
+ cloud_guard_record['target_responder_event_rule'] = True
+
+ cloud_guard_record['target_id'] = self.__cloud_guard_targets[self.__tenancy.id]['id']
+ cloud_guard_record['target_name'] = self.__cloud_guard_targets[self.__tenancy.id]['display_name']
+
+ except Exception:
+ pass
+
+ # Every check in the record must pass for the Cloud Guard OBP to be met
+ all_cloud_guard_checks = all(cloud_guard_record.values())
+
+ self.obp_foundations_checks['Cloud_Guard_Config']['Status'] = all_cloud_guard_checks
+ if all_cloud_guard_checks:
+ self.obp_foundations_checks['Cloud_Guard_Config']['OBP'].append(cloud_guard_record)
+ else:
+ self.obp_foundations_checks['Cloud_Guard_Config']['Findings'].append(cloud_guard_record)
+
+ ##########################################################################
+ # Generates the CIS summary and findings reports
+ ##########################################################################
+ def __report_generate_cis_report(self, level):
+ # This function generates the CSV reports
+
+ # Creating summary report
+ summary_report = []
+ for key, recommendation in self.cis_foundations_benchmark_1_2.items():
+ if recommendation['Level'] <= level:
+ report_filename = "cis" + " " + recommendation['section'] + "_" + recommendation['recommendation_#']
+ report_filename = report_filename.replace(" ", "_").replace(".", "-").replace("_-_", "_") + ".csv"
+ if recommendation['Status']:
+ compliant_output = "Yes"
+ elif recommendation['Status'] is None:
+ compliant_output = "Not Applicable"
+ else:
+ compliant_output = "No"
+ record = {
+ "Recommendation #": f"{key}",
+ "Section": recommendation['section'],
+ "Level": str(recommendation['Level']),
+ "Compliant": compliant_output if compliant_output != "Not Applicable" else "N/A",
+ "Findings": (str(len(recommendation['Findings'])) if len(recommendation['Findings']) > 0 else " "),
+ "Compliant Items": str(len(recommendation['Total']) - len(recommendation['Findings'])),
+ "Total": (str(len(recommendation['Total'])) if len(recommendation['Total']) > 0 else " "),
+ "Title": recommendation['Title'],
+ "CIS v8": recommendation['CISv8'],
+ "CCCS Guard Rail": recommendation['CCCS Guard Rail'],
+ "Filename": report_filename if len(recommendation['Findings']) > 0 else " ",
+ "Remediation": self.cis_report_data[key]['Remediation']
+ }
+ # Add record to summary report for CSV output
+ summary_report.append(record)
+
+ # Generate Findings report
+ # self.__print_to_csv_file("cis", recommendation['section'] + "_" + recommendation['recommendation_#'], recommendation['Findings'])
+
+ # Screen output for CIS Summary Report
+ print_header("CIS Foundations Benchmark 1.2 Summary Report")
+ print('Num' + "\t" + "Level " +
+ "\t" "Compliant" + "\t" + "Findings " + "\t" + "Total " + "\t\t" + 'Title')
+ print('#' * 90)
+ for finding in summary_report:
+ # If print_to_screen is False it will only print non-compliant findings
+ if not (self.__print_to_screen) and finding['Compliant'] == 'No':
+ print(finding['Recommendation #'] + "\t" +
+ finding['Level'] + "\t" + finding['Compliant'] + "\t\t" + finding['Findings'] + "\t\t" +
+ finding['Total'] + "\t\t" + finding['Title'])
+ elif self.__print_to_screen:
+ print(finding['Recommendation #'] + "\t" +
+ finding['Level'] + "\t" + finding['Compliant'] + "\t\t" + finding['Findings'] + "\t\t" +
+ finding['Total'] + "\t\t" + finding['Title'])
+
+ # Generating Summary report CSV
+ print_header("Writing CIS reports to CSV")
+ summary_file_name = self.__print_to_csv_file(
+ self.__report_directory, "cis", "summary_report", summary_report)
+
+ self.__report_generate_html_summary_report(
+ self.__report_directory, "cis", "html_summary_report", summary_report)
+
+ # Copying the report to the output bucket if one is configured
+ if summary_file_name and self.__output_bucket:
+ self.__os_copy_report_to_object_storage(
+ self.__output_bucket, summary_file_name)
+
+ for key, recommendation in self.cis_foundations_benchmark_1_2.items():
+ if recommendation['Level'] <= level:
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "cis", recommendation['section'] + "_" + recommendation['recommendation_#'], recommendation['Findings'])
+ if report_file_name and self.__output_bucket:
+ self.__os_copy_report_to_object_storage(
+ self.__output_bucket, report_file_name)
+
+ ##########################################################################
+ # Generates an HTML report
+ ##########################################################################
+ def __report_generate_html_summary_report(self, report_directory, header, file_subject, data):
+ try:
+ # Creating report directory
+ if not os.path.isdir(report_directory):
+ os.mkdir(report_directory)
+
+ except Exception as e:
+ raise Exception(
+ "Error in creating report directory: " + str(e.args))
+
+ try:
+ # if no data
+ if len(data) == 0:
+ return None
+
+ # get the file name of the CSV
+
+ file_name = header + "_" + file_subject
+ file_name = (file_name.replace(" ", "_")).replace(".", "-").replace("_-_", "_") + ".html"
+ file_path = os.path.join(report_directory, file_name)
+
+ # add report_datetime to each dictionary
+ result = [dict(item, extract_date=self.start_time_str)
+ for item in data]
+
+ # If this flag is set all OCIDs are Hashed to redact them
+ if self.__redact_output:
+ redacted_result = []
+ for item in result:
+ record = {}
+ for key in item.keys():
+ str_item = str(item[key])
+ items_to_redact = re.findall(self.__oci_ocid_pattern, str_item)
+ for redact_me in items_to_redact:
+ str_item = str_item.replace(redact_me, hashlib.sha256(str.encode(redact_me)).hexdigest())
+
+ record[key] = str_item
+
+ redacted_result.append(record)
+ # Overriding result with redacted result
+ result = redacted_result
+
+ # generate fields
+ fields = ['Recommendation #', 'Compliant', 'Section', 'Details']
+
+ html_title = 'CIS OCI Foundations Benchmark 1.2 - Compliance Report'
+ with open(file_path, mode='w') as html_file:
+ # Writing the HTML head and title (minimal, unstyled markup)
+ html_file.write('<html><head><title>' + html_title + '</title></head><body>')
+ html_file.write('<h1>' + html_title.replace('-', '–') + '</h1>')
+ html_file.write('<h2>Tenancy Name: ' + self.__tenancy.name + '</h2>')
+ # Get the extract date
+ r = result[0]
+ extract_date = r['extract_date'].replace('T', ' ')
+ html_file.write('<h3>Extract Date: ' + extract_date + ' UTC</h3>')
+
+ # Writing the summary table and collecting the non-compliant
+ # recommendations for the appendix
+ html_appendix = []
+ html_file.write('<table border="1"><tr>')
+ for field in fields:
+ html_file.write('<th>' + field + '</th>')
+ html_file.write('</tr>')
+ for row in result:
+ if row['Compliant'] == 'No':
+ html_appendix.append(row['Recommendation #'])
+ html_file.write('<tr>')
+ for field in fields:
+ html_file.write('<td>' + str(row.get(field, '')) + '</td>')
+ html_file.write('</tr>')
+ html_file.write('</table>')
+
+ # Creating the appendix for the report
+ for finding in html_appendix:
+ fing = self.cis_foundations_benchmark_1_2[finding]
+ html_file.write(f'<h4>{finding} – {fing["Title"]}</h4>\n')
+ for item_key, item_value in self.cis_report_data[finding].items():
+ if item_value != "":
+ html_file.write(f"<h5>{item_key.title()}</h5>")
+ if item_key == 'Observation':
+ html_file.write(f"<p>{str(len(fing['Findings']))} of {str(len(fing['Total']))} {item_value}</p>\n")
+ else:
+ v = item_value.replace('\n', '<br>')
+ html_file.write(f"<p>{v}</p>\n")
+ html_file.write("<hr>\n")
+ # Closing HTML
+ html_file.write("</body></html>\n")
+
+ print("HTML: " + file_subject.ljust(22) + " --> " + file_path)
+ # Used by Upload
+
+ return file_path
+
+ except Exception as e:
+ raise Exception("Error in report_generate_html_report: " + str(e.args))
+
+ ##########################################################################
+ # Orchestrates analysis and report generation
+ ##########################################################################
+ def __report_generate_obp_report(self):
+
+ obp_summary_report = []
+ # Screen output for the OCI Best Practices Summary Report
+ print_header("OCI Best Practices Findings")
+ print('Category' + "\t\t\t\t" + "Compliant" + "\t" + "Findings " + "\tBest Practices")
+ print('#' * 90)
+ # Adding data to summary report
+ for key, recommendation in self.obp_foundations_checks.items():
+ padding = str(key).ljust(25, " ")
+ print(padding + "\t\t" + str(recommendation['Status']) + "\t" + "\t" + str(len(recommendation['Findings'])) + "\t" + "\t" + str(len(recommendation['OBP'])))
+ record = {
+ "Recommendation": str(key),
+ "Compliant": ('Yes' if recommendation['Status'] else 'No'),
+ "OBP": (str(len(recommendation['OBP'])) if len(recommendation['OBP']) > 0 else " "),
+ "Findings": (str(len(recommendation['Findings'])) if len(recommendation['Findings']) > 0 else " "),
+ "Documentation": recommendation['Documentation']
+ }
+ obp_summary_report.append(record)
+
+ print_header("Writing Oracle Best Practices reports to CSV")
+
+ summary_report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "obp", "OBP_Summary", obp_summary_report)
+
+ if summary_report_file_name and self.__output_bucket:
+ self.__os_copy_report_to_object_storage(
+ self.__output_bucket, summary_report_file_name)
+
+ # Printing Findings to CSV
+ for key, value in self.obp_foundations_checks.items():
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "obp", key + "_Findings", value['Findings'])
+ if report_file_name and self.__output_bucket:
+ self.__os_copy_report_to_object_storage(
+ self.__output_bucket, report_file_name)
+
+ # Printing OBPs to CSV
+ for key, value in self.obp_foundations_checks.items():
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "obp", key + "_Best_Practices", value['OBP'])
+ if report_file_name and self.__output_bucket:
+ self.__os_copy_report_to_object_storage(
+ self.__output_bucket, report_file_name)
+
+ ##########################################################################
+ # Coordinates calls of all the read functions required for analyzing the tenancy
+ ##########################################################################
+ def __collect_tenancy_data(self):
+
+ # Runs identity functions only in home region
+
+ thread_compartments = Thread(target=self.__identity_read_compartments)
+ thread_compartments.start()
+
+ thread_identity_groups = Thread(target=self.__identity_read_groups_and_membership)
+ thread_identity_groups.start()
+
+ thread_cloud_guard_config = Thread(target=self.__cloud_guard_read_cloud_guard_configuration)
+ thread_cloud_guard_config.start()
+
+ thread_compartments.join()
+ thread_cloud_guard_config.join()
+ thread_identity_groups.join()
+
+ print("\nProcessing Home Region resources...")
+
+ cis_home_region_functions = [
+ self.__identity_read_users,
+ self.__identity_read_tenancy_password_policy,
+ self.__identity_read_dynamic_groups,
+ self.__identity_read_domains,
+ self.__audit_read_tenancy_audit_configuration,
+ self.__identity_read_availability_domains,
+ self.__identity_read_tag_defaults,
+ self.__identity_read_tenancy_policies,
+ ]
+
+ # Budgets are a global construct
+ if self.__obp_checks:
+ obp_home_region_functions = [
+ self.__budget_read_budgets,
+ self.__cloud_guard_read_cloud_guard_targets
+ ]
+ else:
+ obp_home_region_functions = []
+
+ # Threads for Home region checks
+ home_threads = []
+ for home_func in cis_home_region_functions + obp_home_region_functions:
+ t = Thread(target=home_func)
+ t.start()
+ home_threads.append(t)
+
+ # Waiting for home threads to complete
+ for t in home_threads:
+ t.join()
+
+ # The above checks are run in the home region
+ if self.__home_region not in self.__regions_to_run_in and not (self.__run_in_all_regions):
+ self.__regions.pop(self.__home_region)
+
+ print("\nProcessing regional resources...")
+
+ # List of functions for CIS
+ cis_regional_functions = [
+ self.__search_resources_in_root_compartment,
+ self.__vault_read_vaults,
+ self.__os_read_buckets,
+ self.__logging_read_log_groups_and_logs,
+ self.__events_read_event_rules,
+ self.__ons_read_subscriptions,
+ self.__network_read_network_security_lists,
+ self.__network_read_network_security_groups_rules,
+ self.__network_read_network_subnets,
+ self.__adb_read_adbs,
+ self.__oic_read_oics,
+ self.__oac_read_oacs,
+ self.__block_volume_read_block_volumes,
+ self.__boot_volume_read_boot_volumes,
+ self.__fss_read_fsss,
+ ]
+
+ # Oracle Best practice functions
+ if self.__obp_checks:
+ obp_functions = [
+ self.__network_read_fastonnects,
+ self.__network_read_ip_sec_connections,
+ self.__network_read_drgs,
+ self.__network_read_drg_attachments,
+ self.__sch_read_service_connectors,
+ ]
+ else:
+ obp_functions = []
+
+ with concurrent.futures.ThreadPoolExecutor(max_workers=6) as executor:
+ # Submit each function directly to the executor
+ futures = []
+ for func in cis_regional_functions + obp_functions:
+ futures.append(executor.submit(func))
+
+ # Wait for all functions to complete
+ for future in concurrent.futures.as_completed(futures):
+ future.result()
+
+ ##########################################################################
+ # Generate Raw Data Output
+ ##########################################################################
+ def __report_generate_raw_data_output(self):
+
+ # List to store output reports if copying to object storage is required
+ list_report_file_names = []
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "identity_groups_and_membership", self.__groups_to_users)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "identity_domains", self.__identity_domains)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "identity_users", self.__users)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "identity_policies", self.__policies)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "identity_dynamic_groups", self.__dynamic_groups)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "identity_tags", self.__tag_defaults)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "identity_compartments", self.__raw_compartment)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "network_security_groups", self.__network_security_groups)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "network_security_lists", self.__network_security_lists)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "network_subnets", self.__network_subnets)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "autonomous_databases", self.__autonomous_databases)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "analytics_instances", self.__analytics_instances)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "integration_instances", self.__integration_instances)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "event_rules", self.__event_rules)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "log_groups_and_logs", self.__logging_list)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "object_storage_buckets", self.__buckets)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "boot_volumes", self.__boot_volumes)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "block_volumes", self.__block_volumes)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "file_storage_system", self.__file_storage_system)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "vaults_and_keys", self.__vaults)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "ons_subscriptions", self.__subscriptions)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "budgets", self.__budgets)
+ list_report_file_names.append(report_file_name)
+
+ # Converting the dict of service connectors to a list of its values
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "service_connectors", list(self.__service_connectors.values()))
+ list_report_file_names.append(report_file_name)
+
+ # Flattening a dict of lists into a single flat list
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "network_fastconnects", (list(itertools.chain.from_iterable(self.__network_fastconnects.values()))))
+ list_report_file_names.append(report_file_name)
+
+ # Flattening a dict of lists into a single flat list
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "network_ipsec_connections", list(itertools.chain.from_iterable(self.__network_ipsec_connections.values())))
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "network_drgs", self.__raw_network_drgs)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "cloud_guard_target", list(self.__cloud_guard_targets.values()))
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "regions", self.__raw_regions)
+ list_report_file_names.append(report_file_name)
+
+ report_file_name = self.__print_to_csv_file(
+ self.__report_directory, "raw_data", "network_drg_attachments", list(itertools.chain.from_iterable(self.__network_drg_attachments.values())))
+ list_report_file_names.append(report_file_name)
+
+ if self.__output_bucket:
+ for raw_report in list_report_file_names:
+ if raw_report:
+ self.__os_copy_report_to_object_storage(
+ self.__output_bucket, raw_report)
+
+ ##########################################################################
+ # Copy Report to Object Storage
+ ##########################################################################
+ def __os_copy_report_to_object_storage(self, bucketname, filename):
+ object_name = filename
+ # print(self.__os_namespace)
+ try:
+ with open(filename, "rb") as f:
+ try:
+ self.__output_bucket_client.put_object(
+ self.__os_namespace, bucketname, object_name, f)
+ except Exception:
+ print("Failed to write " + object_name + " to bucket " + bucketname + ". Please check your bucket and IAM permissions.")
+
+ except Exception as e:
+ raise Exception(
+ "Error opening file os_copy_report_to_object_storage: " + str(e.args))
+
+ ##########################################################################
+ # Print to CSV
+ ##########################################################################
+ def __print_to_csv_file(self, report_directory, header, file_subject, data):
+ debug("__print_to_csv_file: " + header + "_" + file_subject)
+ try:
+ # Creating report directory
+ if not os.path.isdir(report_directory):
+ os.mkdir(report_directory)
+
+ except Exception as e:
+ raise Exception(
+ "Error in creating report directory: " + str(e.args))
+
+ try:
+ # if no data
+ if len(data) == 0:
+ return None
+
+ # get the file name of the CSV
+
+ file_name = header + "_" + file_subject
+ file_name = (file_name.replace(" ", "_")).replace(".", "-").replace("_-_", "_") + ".csv"
+ file_path = os.path.join(report_directory, file_name)
+
+ # add report_datetime to each dictionary
+ result = [dict(item, extract_date=self.start_time_str)
+ for item in data]
+
+ # If this flag is set all OCIDs are Hashed to redact them
+ if self.__redact_output:
+ redacted_result = []
+ for item in result:
+ record = {}
+ for key in item.keys():
+ str_item = str(item[key])
+ items_to_redact = re.findall(self.__oci_ocid_pattern, str_item)
+ for redact_me in items_to_redact:
+ str_item = str_item.replace(redact_me, hashlib.sha256(str.encode(redact_me)).hexdigest())
+
+ record[key] = str_item
+
+ redacted_result.append(record)
+ # Overriding result with redacted result
+ result = redacted_result
+
+ # generate fields
+ fields = [key for key in result[0].keys()]
+
+ with open(file_path, mode='w', newline='') as csv_file:
+ writer = csv.DictWriter(csv_file, fieldnames=fields)
+
+ # write header
+ writer.writeheader()
+
+ for row in result:
+ writer.writerow(row)
+ # print(row)
+
+ print("CSV: " + file_subject.ljust(22) + " --> " + file_path)
+ # Used by Upload
+
+ return file_path
+
+ except Exception as e:
+ raise Exception("Error in print_to_csv_file: " + str(e.args))
+
+ ##########################################################################
+ # Orchestrates Data collection and reports
+ ##########################################################################
+ def generate_reports(self, level=2):
+
+ # Collecting all the tenancy data
+ self.__collect_tenancy_data()
+
+ # Analyzing Data for CIS reports
+ self.__report_cis_analyze_tenancy_data()
+
+ # Generate CIS reports
+ self.__report_generate_cis_report(level)
+
+ if self.__obp_checks:
+ # Analyzing Data for OBP reports
+ self.__obp_analyze_tenancy_data()
+ self.__report_generate_obp_report()
+
+ if self.__output_raw_data:
+ self.__report_generate_raw_data_output()
+
+ if self.__errors:
+ error_report = self.__print_to_csv_file(
+ self.__report_directory, "error", "report", self.__errors)
+
+ if self.__output_bucket:
+ if error_report:
+ self.__os_copy_report_to_object_storage(
+ self.__output_bucket, error_report)
+
+ end_datetime = datetime.datetime.now().replace(tzinfo=pytz.UTC)
+ end_time_str = str(end_datetime.strftime("%Y-%m-%dT%H:%M:%S"))
+
+ print_header("Finished at " + end_time_str + ", duration: " + str(end_datetime - self.start_datetime))
+
+ return self.__report_directory
+
+ def get_obp_checks(self):
+ self.__obp_checks = True
+ self.generate_reports()
+ return self.obp_foundations_checks
+
+ ##########################################################################
+ # Create CSV Hyperlink
+ ##########################################################################
+ def __generate_csv_hyperlink(self, url, name):
+ if len(url) < 255:
+ return '=HYPERLINK("' + url + '","' + name + '")'
+ else:
+ return url
+
+
+##########################################################################
+# Check whether a service error should be treated as a warning instead of an error
+##########################################################################
+def check_service_error(code):
+ return ('max retries exceeded' in str(code).lower() or
+ 'auth' in str(code).lower() or
+ 'notfound' in str(code).lower() or
+ code == 'Forbidden' or
+ code == 'TooManyRequests' or
+ code == 'IncorrectState' or
+ code == 'LimitExceeded')
+
+
+##########################################################################
+# Create signer for Authentication
+# Input - config_profile and is_instance_principals and is_delegation_token
+# Output - config and signer objects
+##########################################################################
+def create_signer(file_location, config_profile, is_instance_principals, is_delegation_token, is_security_token):
+
+ # if instance principals authentications
+ if is_instance_principals:
+ try:
+ signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
+ config = {'region': signer.region, 'tenancy': signer.tenancy_id}
+ return config, signer
+
+ except Exception:
+ print("Error obtaining instance principals certificate, aborting")
+ raise SystemExit
+
+ # -----------------------------
+ # Delegation Token
+ # -----------------------------
+ elif is_delegation_token:
+
+ try:
+ # check if env variables OCI_CONFIG_FILE, OCI_CONFIG_PROFILE exist and use them
+ env_config_file = os.environ.get('OCI_CONFIG_FILE')
+ env_config_section = os.environ.get('OCI_CONFIG_PROFILE')
+
+ # check if file exist
+ if env_config_file is None or env_config_section is None:
+ print(
+ "*** OCI_CONFIG_FILE and OCI_CONFIG_PROFILE env variables not found, abort. ***")
+ print("")
+ raise SystemExit
+
+ config = oci.config.from_file(env_config_file, env_config_section)
+ delegation_token_location = config["delegation_token_file"]
+
+ with open(delegation_token_location, 'r') as delegation_token_file:
+ delegation_token = delegation_token_file.read().strip()
+ # get signer from delegation token
+ signer = oci.auth.signers.InstancePrincipalsDelegationTokenSigner(
+ delegation_token=delegation_token)
+
+ return config, signer
+
+ except KeyError:
+ print("* Key Error obtaining delegation_token_file")
+ raise SystemExit
+
+ except Exception:
+ raise
+ # ---------------------------------------------------------------------------
+ # Security Token - Credit to Dave Knot (https://github.com/dns-prefetch)
+ # ---------------------------------------------------------------------------
+ elif is_security_token:
+
+ try:
+ # Read the token file from the security_token_file parameter of the .config file
+ config = oci.config.from_file(
+ oci.config.DEFAULT_LOCATION,
+ (config_profile if config_profile else oci.config.DEFAULT_PROFILE)
+ )
+
+ token_file = config['security_token_file']
+ token = None
+ with open(token_file, 'r') as f:
+ token = f.read()
+
+ # Read the private key specified by the .config file.
+ private_key = oci.signer.load_private_key_from_file(config['key_file'])
+
+ signer = oci.auth.signers.SecurityTokenSigner(token, private_key)
+
+ return config, signer
+
+ except KeyError:
+ print("* Key Error obtaining security_token_file")
+ raise SystemExit
+
+ except Exception:
+ raise
+
+ # -----------------------------
+ # config file authentication
+ # -----------------------------
+ else:
+
+ try:
+ config = oci.config.from_file(
+ file_location if file_location else oci.config.DEFAULT_LOCATION,
+ (config_profile if config_profile else oci.config.DEFAULT_PROFILE)
+ )
+ signer = oci.signer.Signer(
+ tenancy=config["tenancy"],
+ user=config["user"],
+ fingerprint=config["fingerprint"],
+ private_key_file_location=config.get("key_file"),
+ pass_phrase=oci.config.get_config_value_or_default(
+ config, "pass_phrase"),
+ private_key_content=config.get("key_content")
+ )
+ return config, signer
+ except Exception:
+ print(
+                f'** OCI Config was not found here: {oci.config.DEFAULT_LOCATION} or env variables missing, aborting **')
+ raise SystemExit
+
+
+##########################################################################
+# Arg Parsing function to be updated
+##########################################################################
+def set_parser_arguments():
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ '-i',
+ type=argparse.FileType('r'),
+ dest='input',
+ help="Input JSON File"
+ )
+ parser.add_argument(
+ '-o',
+ type=argparse.FileType('w'),
+ dest='output_csv',
+ help="CSV Output prefix")
+ result = parser.parse_args()
+
+ if len(sys.argv) < 3:
+ parser.print_help()
+ return None
+
+ return result
+
+
+##########################################################################
+# execute_report
+##########################################################################
+def execute_report():
+
+ # Get Command Line Parser
+ parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100, width=180))
+ parser.add_argument('-c', default="", dest='file_location',
+ help='OCI config file location')
+ parser.add_argument('-t', default="", dest='config_profile',
+ help='Config file section to use (tenancy profile) ')
+ parser.add_argument('-p', default="", dest='proxy',
+ help='Set Proxy (i.e. www-proxy-server.com:80) ')
+ parser.add_argument('--output-to-bucket', default="", dest='output_bucket',
+ help='Set Output bucket name (i.e. my-reporting-bucket) ')
+ parser.add_argument('--report-directory', default=None, dest='report_directory',
+                        help='Set Output report directory; by default it is the current date (i.e. reports-date) ')
+ parser.add_argument('--print-to-screen', default='True', dest='print_to_screen',
+ help='Set to False if you want to see only non-compliant findings (i.e. False) ')
+ parser.add_argument('--level', default=2, dest='level',
+ help='CIS Recommendation Level options are: 1 or 2. Set to 2 by default ')
+ parser.add_argument('--regions', default="", dest='regions',
+ help='Regions to run the compliance checks on, by default it will run in all regions. Sample input: us-ashburn-1,ca-toronto-1,eu-frankfurt-1')
+ parser.add_argument('--raw', action='store_true', default=False,
+ help='Outputs all resource data into CSV files')
+ parser.add_argument('--obp', action='store_true', default=False,
+ help='Checks for OCI best practices')
+ parser.add_argument('--redact_output', action='store_true', default=False,
+ help='Redacts OCIDs in output CSV files')
+ parser.add_argument('-ip', action='store_true', default=False,
+ dest='is_instance_principals', help='Use Instance Principals for Authentication ')
+ parser.add_argument('-dt', action='store_true', default=False,
+ dest='is_delegation_token', help='Use Delegation Token for Authentication in Cloud Shell')
+ parser.add_argument('-st', action='store_true', default=False,
+ dest='is_security_token', help='Authenticate using Security Token')
+ parser.add_argument('-v', action='store_true', default=False,
+ dest='version', help='Show the version of the script and exit.')
+ parser.add_argument('--debug', action='store_true', default=False,
+ dest='debug', help='Enables debugging messages. This feature is in beta')
+ cmd = parser.parse_args()
+
+ if cmd.version:
+ show_version()
+ sys.exit()
+
+ config, signer = create_signer(cmd.file_location, cmd.config_profile, cmd.is_instance_principals, cmd.is_delegation_token, cmd.is_security_token)
+ config['retry_strategy'] = oci.retry.DEFAULT_RETRY_STRATEGY
+ report = CIS_Report(config, signer, cmd.proxy, cmd.output_bucket, cmd.report_directory, cmd.print_to_screen, \
+ cmd.regions, cmd.raw, cmd.obp, cmd.redact_output, debug=cmd.debug)
+ csv_report_directory = report.generate_reports(int(cmd.level))
+
+ try:
+ if OUTPUT_TO_XLSX:
+ workbook = Workbook(csv_report_directory + '/Consolidated_Report.xlsx', {'in_memory': True})
+ for csvfile in glob.glob(csv_report_directory + '/*.csv'):
+
+ worksheet_name = csvfile.split(os.path.sep)[-1].replace(".csv", "").replace("raw_data_", "raw_").replace("Findings", "fds").replace("Best_Practices", "bps")
+
+ if "Identity_and_Access_Management" in worksheet_name:
+ worksheet_name = worksheet_name.replace("Identity_and_Access_Management", "IAM")
+ elif "Storage_Object_Storage" in worksheet_name:
+ worksheet_name = worksheet_name.replace("Storage_Object_Storage", "Object_Storage")
+ elif "raw_identity_groups_and_membership" in worksheet_name:
+ worksheet_name = worksheet_name.replace("raw_identity", "raw_iam")
+ elif "Cost_Tracking_Budgets_Best_Practices" in worksheet_name:
+ worksheet_name = worksheet_name.replace("Cost_Tracking_", "")
+ elif "Storage_File_Storage_Service" in worksheet_name:
+ worksheet_name = worksheet_name.replace("Storage_File_Storage_Service", "FSS")
+ elif "raw_cloud_guard_target" in worksheet_name:
+ # cloud guard targets are too large for a cell
+ continue
+ elif len(worksheet_name) > 31:
+ worksheet_name = worksheet_name.replace("_", "")
+
+ worksheet = workbook.add_worksheet(worksheet_name)
+ with open(csvfile, 'rt', encoding='unicode_escape') as f:
+ reader = csv.reader(f)
+ for r, row in enumerate(reader):
+ for c, col in enumerate(row):
+                        # Skipping the deep link due to formatting errors in xlsx
+ if "=HYPERLINK" not in col:
+ worksheet.write(r, c, col)
+ workbook.close()
+ except Exception as e:
+ print("**Failed to output to excel. Please use CSV files.**")
+ print(e)
+
+
+##########################################################################
+# Main
+##########################################################################
+if __name__ == "__main__":
+ execute_report()
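The redaction pass in `print_to_csv_file` above replaces every OCID in a row with its SHA-256 digest before the row is written to CSV. The following is a minimal standalone sketch of that idea; the regex is an assumed generic OCID pattern, not the script's actual `__oci_ocid_pattern`, and the sample row is hypothetical:

```python
import hashlib
import re

# Assumed pattern for the generic "ocid1.<type>.<realm>.<region>.<unique-id>"
# shape; the real script compiles its own __oci_ocid_pattern.
OCID_PATTERN = r'ocid1\.[a-z0-9-]+\.[a-z0-9-]*\.[a-z0-9-]*\.[a-z0-9]+'

def redact_ocids(value: str) -> str:
    # Replace each matched OCID with its SHA-256 hex digest so reports
    # can be shared without exposing resource identifiers.
    for ocid in re.findall(OCID_PATTERN, value):
        value = value.replace(ocid, hashlib.sha256(ocid.encode()).hexdigest())
    return value

row = {"name": "demo", "id": "ocid1.instance.oc1..aaaabbbb1234"}
redacted = {k: redact_ocids(str(v)) for k, v in row.items()}
print(redacted["id"])  # a 64-character SHA-256 hex digest
```

Hashing (rather than blanking) keeps redacted values stable across reports, so the same resource can still be correlated between CSV files.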
diff --git a/cd3_automation_toolkit/commonTools.py b/cd3_automation_toolkit/commonTools.py
index b24659182..c46d0cfe6 100644
--- a/cd3_automation_toolkit/commonTools.py
+++ b/cd3_automation_toolkit/commonTools.py
@@ -20,6 +20,7 @@
import re
import json as simplejson
import warnings
+import threading
warnings.simplefilter("ignore")
def data_frame(filename,sheetname):
@@ -46,6 +47,21 @@ def __init__(self):
self.region_dict={}
self.protocol_dict={}
self.sheet_dict={}
+ self.reg_filter = None
+ self.comp_filter = None
+ self.default_dns = None
+ self.ins_pattern_filter = None
+ self.ins_ad_filter = None
+ self.bv_pattern_filter = None
+ self.bv_ad_filter = None
+ self.orm_reg_filter = None
+ self.orm_comp_filter = None
+ self.vault_region = None
+ self.vault_comp = None
+ self.budget_amount = None
+ self.budget_threshold = None
+ self.cg_region = None
+
# When called from wthin OCSWorkVM or user-scripts
dir=os.getcwd()
@@ -81,12 +97,82 @@ def __init__(self):
if ("OCSWorkVM" in dir):
os.chdir(dir)
#os.chdir(dir)
+ # Get Export filters
+ def get_export_filters(self,export_filters):
+ for i in export_filters:
+ if 'reg_filter' in i:
+ self.reg_filter = (i.split("=")[1])[2:][:-2]
+ if 'comp_filter' in i:
+ self.comp_filter = (i.split("=")[1])[2:][:-2]
+ self.comp_filter = self.comp_filter if self.comp_filter else "null"
+ if 'default_dns' in i:
+ self.default_dns = (i.split("=")[1])[2:][:-2]
+
+ if 'ins_pattern_filter' in i:
+ self.ins_pattern_filter = (i.split("=")[1])[2:][:-2]
+
+ if 'ins_ad_filter' in i:
+ self.ins_ad_filter = (i.split("=")[1])[2:][:-2]
+
+ if 'bv_pattern_filter' in i:
+ self.bv_pattern_filter = (i.split("=")[1])[2:][:-2]
+
+ if 'bv_ad_filter' in i:
+ self.bv_ad_filter = (i.split("=")[1])[2:][:-2]
+
+ if 'orm_region' in i:
+ self.orm_reg_filter = (i.split("=")[1])[2:][:-2]
+
+ if 'orm_compartments' in i:
+ self.orm_comp_filter = (i.split("=")[1])[2:][:-2]
+ self.orm_comp_filter = self.orm_comp_filter if self.orm_comp_filter else "null"
+ if 'vault_region' in i:
+ self.vault_region = (i.split("=")[1])[2:][:-2]
+
+ if 'vault_comp' in i:
+ self.vault_comp = (i.split("=")[1])[2:][:-2]
+
+ if 'budget_amount' in i:
+ self.budget_amount = (i.split("=")[1])[2:][:-2]
+
+ if 'budget_threshold' in i:
+ self.budget_threshold = (i.split("=")[1])[2:][:-2]
+
+ if 'cg_region' in i:
+ self.cg_region = (i.split("=")[1])[2:][:-2]
+
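The slice-based parsing above assumes each export filter arrives as a `key=["value"]` string. A small sketch of what `(i.split("=")[1])[2:][:-2]` extracts (the sample entry is hypothetical):

```python
# Hypothetical export-filter entry in the key=["value"] shape assumed by
# get_export_filters(): split("=")[1] keeps the right-hand side, [2:]
# strips the leading '["' and [:-2] strips the trailing '"]'.
entry = 'reg_filter=["us-ashburn-1,us-phoenix-1"]'
value = (entry.split("=")[1])[2:][:-2]
print(value)  # us-ashburn-1,us-phoenix-1
```

Note that the slicing only works for entries in exactly this shape; an entry without the `["…"]` wrapper would come out truncated.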
+ # OCI API Authentication
+ def authenticate(self,auth_mechanism,config_file_path=DEFAULT_LOCATION):
+ signer = None
+
+ try:
+ config = oci.config.from_file(file_location=config_file_path)
+ except Exception as e:
+ print(str(e))
+ print(".....Exiting!!!")
+ exit(0)
+
+ if auth_mechanism == 'api_key':
+ signer = oci.signer.Signer(config['tenancy'],config['user'],config['fingerprint'],config['key_file'])
+ elif auth_mechanism == 'session_token':
+ token_file = config['security_token_file']
+ token = None
+ with open(token_file, 'r') as f:
+ token = f.read()
+
+ private_key = oci.signer.load_private_key_from_file(config['key_file'])
+ signer = oci.auth.signers.SecurityTokenSigner(token, private_key)
+ elif auth_mechanism == 'instance_principal':
+ signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
+
+ return config,signer
+
#Get Tenancy Regions
- def get_subscribedregions(self, configFileName=DEFAULT_LOCATION):
+ def get_subscribedregions(self, config,signer):
#Get config client
- config = oci.config.from_file(file_location=configFileName)
- idc = IdentityClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ #config = oci.config.from_file(file_location=configFileName)
+ idc = IdentityClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
regionsubscriptions = idc.list_region_subscriptions(tenancy_id=config['tenancy'])
homeregion=""
for rs in regionsubscriptions.data:
@@ -109,15 +195,11 @@ def get_subscribedregions(self, configFileName=DEFAULT_LOCATION):
return subs_region_list
#Get Compartment OCIDs
- def get_network_compartment_ids(self,c_id, c_name,configFileName):
+ def get_network_compartment_ids(self,c_id, c_name,config, signer):
# Get config client
- if configFileName == "":
- config = oci.config.from_file()
- else:
- config = oci.config.from_file(file_location=configFileName)
tenancy_id=config['tenancy']
- idc = IdentityClient(config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ idc = IdentityClient(config=config,retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
compartments = oci.pagination.list_call_get_all_results(idc.list_compartments,compartment_id=c_id, compartment_id_in_subtree=False)
for c in compartments.data:
@@ -137,7 +219,7 @@ def get_network_compartment_ids(self,c_id, c_name,configFileName):
if (c_details.compartment_id != tenancy_id):
self.ntk_compartment_ids.pop(c.name)
- self.get_network_compartment_ids(c.id, name,configFileName)
+ self.get_network_compartment_ids(c.id, name,config,signer)
self.ntk_compartment_ids["root"]=tenancy_id
del tenancy_id
@@ -181,7 +263,10 @@ def get_compartment_map(self, var_file, resource_name):
input_compartment_names = None
else:
compartment_list_str = "Enter name of Compartment as it appears in OCI (comma separated without spaces if multiple)for which you want to export {};\nPress 'Enter' to export from all the Compartments: "
- compartments = input(compartment_list_str.format(resource_name))
+ if self.comp_filter == "null":
+ compartments = None
+ else:
+ compartments = self.comp_filter if self.comp_filter else input(compartment_list_str.format(resource_name))
input_compartment_names = list(map(lambda x: x.strip(), compartments.split(','))) if compartments else None
comp_list_fetch = []
@@ -404,7 +489,7 @@ def read_cd3(cd3file, sheet_name):
except Exception as e:
print(str(e))
print("Exiting!!")
- exit()
+ exit(1)
values_for_column = collections.OrderedDict()
# values_for_column={}
@@ -422,7 +507,7 @@ def write_to_cd3(values_for_column, cd3file, sheet_name):
except Exception as e:
print(str(e))
print("Exiting!!")
- exit()
+ exit(1)
if (sheet_name == "VCN Info"):
onprem_destinations = ""
ngw_destinations = ""
@@ -450,7 +535,7 @@ def write_to_cd3(values_for_column, cd3file, sheet_name):
except Exception as e:
print(str(e))
print("Exiting!!")
- exit()
+ exit(1)
return
@@ -469,7 +554,7 @@ def write_to_cd3(values_for_column, cd3file, sheet_name):
except Exception as e:
print(str(e))
print("Exiting!!")
- exit()
+ exit(1)
return
sheet_max_rows = sheet.max_row
@@ -548,7 +633,7 @@ def write_to_cd3(values_for_column, cd3file, sheet_name):
except Exception as e:
print(str(e))
print("Exiting!!")
- exit()
+ exit(1)
# def backup_file(src_dir, pattern, overwrite):
def backup_file(src_dir, resource, pattern):
@@ -863,7 +948,7 @@ def section(title='', header=False, padding=117):
print(separator * padding)
-def exit_menu(msg, exit_code=0):
+def exit_menu(msg, exit_code=1):
print(msg)
exit(exit_code)
@@ -936,18 +1021,19 @@ def __init__(self, filename):
class cd3Services():
+
#Get OCI Cloud Regions
regions_list = ""
- def fetch_regions(self,configFileName=DEFAULT_LOCATION):
- config = oci.config.from_file(file_location=configFileName)
- idc = IdentityClient(config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
+ def fetch_regions(self,config,signer):
+ #config = oci.config.from_file(file_location=configFileName)
+ idc = IdentityClient(config=config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,signer=signer)
try:
regions_list = idc.list_regions().data
except Exception as e:
print(e)
if ('NotAuthenticated' in str(e)):
print("\nInvalid Credetials - check your keypair/fingerprint/region...Exiting!!!")
- exit()
+ exit(1)
if ("OCSWorkVM" in os.getcwd() or 'user-scripts' in os.getcwd()):
os.chdir("../")
diff --git a/cd3_automation_toolkit/documentation/user_guide/Auth_Mechanisms_in_OCI.md b/cd3_automation_toolkit/documentation/user_guide/Auth_Mechanisms_in_OCI.md
new file mode 100644
index 000000000..7face1540
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/Auth_Mechanisms_in_OCI.md
@@ -0,0 +1,64 @@
+# OCI SDK Authentication Methods
+Choose one of the authentication mechanisms below to be used for the toolkit execution -
+
+- [API key-based authentication](#api-key-based-authentication)
+- [Session token-based authentication](#session-token-based-authentication)
+- [Instance principal](#instance-principal)
+
+## API key-based authentication
+Follow the steps below to use API key-based authentication -
+1. Create API PEM Key - an RSA key pair in PEM format (minimum 2048 bits) is needed to use OCI APIs.
+
+   If the key pair does not exist, create one using the commands below inside the docker container:
+ ```cd /cd3user/oci_tools/cd3_automation_toolkit/user-scripts/```
+ ```python createAPIKey.py```
+
+→ This will generate the public/private key pair (oci_api_public.pem and oci_api_private.pem) at /cd3user/tenancies/keys/
+
+   In case you already have the keys, copy the private key file into the container at /cd3user/tenancies/keys/
+
+2. Upload Public Key
+
+ Upload the Public key to "APIkeys" under user settings in OCI Console.
+ - Open the Console, and sign in as the user.
+ - View the details for the user who will be calling the API with the key pair.
+ - Open the Profile menu (User menu icon) and click User Settings.
+ - Click Add Public Key.
+ - Paste the contents of the PEM public key in the dialog box and click Add.
+
+ > Note
+   > * Please note down these details for the next step - User OCID, Private Key path, Fingerprint, Tenancy OCID. The user should have administrator access to the tenancy to use the complete functionality of the toolkit.
+
+## Session token-based authentication
+Follow the steps below to use Session token-based authentication -
+1. Use the command below to create a config file inside the container; it is needed to generate the session token. You can skip this step if you already have a valid config (with an API key) and have uploaded the public key to OCI for a user. In that case, copy the config file and private API key into the container at /cd3user/.oci
+ ```oci setup config```
+
+
+
+2. Execute ```oci session authenticate --no-browser``` to generate session token for the private key.
+   Answer the prompts. Enter 'DEFAULT' for the profile name and proceed to update the config file with the session token information at the default location /cd3user/.oci
+
+
+3. The token will be generated at the default location /cd3user/.oci
+
+
+
+> Note
+> * The createTenancyConfig.py script will use the config file located at /cd3user/.oci. The toolkit supports only the DEFAULT profile name.
+> * The generated session token is valid for a maximum of 60 minutes. Repeat from step 1 if a new session token is required after expiry. The user should have administrator access to the tenancy to use the complete functionality of the toolkit.
+
+## Instance principal
+Follow the steps below to use Instance Principal authentication -
+1. Launch an instance in the tenancy and set up the toolkit docker container on that instance.
+2. Create a Dynamic Group for this instance.
+3. Write an IAM policy to assign privileges to this dynamic group. The dynamic group (containing the instance) should have administrator access to the tenancy to use the complete functionality of the toolkit.
+
+
+
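Steps 2 and 3 above can be sketched as follows. The dynamic-group name and compartment OCID are placeholders, and a policy narrower than `manage all-resources` may better suit your security posture:

```
# Dynamic group matching rule (all instances in a given compartment):
ALL {instance.compartment.id = 'ocid1.compartment.oc1..<unique_id>'}

# IAM policy granting the dynamic group tenancy-wide access:
Allow dynamic-group cd3-toolkit-dg to manage all-resources in tenancy
```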
diff --git a/cd3_automation_toolkit/documentation/user_guide/ComputeGF.md b/cd3_automation_toolkit/documentation/user_guide/ComputeGF.md
index 2fd3cf3fe..0df99ea68 100755
--- a/cd3_automation_toolkit/documentation/user_guide/ComputeGF.md
+++ b/cd3_automation_toolkit/documentation/user_guide/ComputeGF.md
@@ -113,7 +113,7 @@ On re-running the same option you will find the previously existing files being
diff --git a/cd3_automation_toolkit/documentation/user_guide/ComputeNGF.md b/cd3_automation_toolkit/documentation/user_guide/ComputeNGF.md
index a5b53a1c9..c374d46ae 100644
--- a/cd3_automation_toolkit/documentation/user_guide/ComputeNGF.md
+++ b/cd3_automation_toolkit/documentation/user_guide/ComputeNGF.md
@@ -1,4 +1,4 @@
-# Managing Compute Instances for Non-Greenfield tenancies
+# Exporting Compute Instances from OCI
Follow the below steps to export OCI compute Instances to CD3 Excel file and create the terraform state:
@@ -25,7 +25,14 @@ Follow the below steps to export OCI compute Instances to CD3 Excel file and cre
10. The associated ssh public keys are placed under variables_.tf under the "instance_ssh_keys" variable.
11. While export of instances, it will fetch details for only the primary VNIC attached to the instance.
12. Execute the .sh file ( *sh tf_import_commands_instances_nonGF.sh*) to generate terraform state file.
+13. Please [read](/cd3_automation_toolkit/documentation/user_guide/KnownBehaviour.md#8) about the known behaviour of the toolkit when exporting instances that have multiple plugins.
-
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/Connect_container_to_OCI_Tenancy.md b/cd3_automation_toolkit/documentation/user_guide/Connect_container_to_OCI_Tenancy.md
index a61d94ce2..31beee124 100644
--- a/cd3_automation_toolkit/documentation/user_guide/Connect_container_to_OCI_Tenancy.md
+++ b/cd3_automation_toolkit/documentation/user_guide/Connect_container_to_OCI_Tenancy.md
@@ -1,89 +1,70 @@
-# Connect Docker container to OCI Tenancy
+# Connect container to OCI Tenancy
-
-> ***Same container can be connected to multiple OCI tenancies. Repeat this process for every new OCI tenancy.***
+> [!Important]
+> * It is recommended to execute createTenancyConfig.py with a single customer name within that container. Even if it is run multiple times with different customer names, Jenkins will only be configured for the customer name used during the first successful execution of the script.
+> * If a new region is subscribed to the tenancy at a later stage, createTenancyConfig.py must be re-run using the same tenancyconfig.properties file that was originally used to create the configuration. Re-execution will create a new directory for the new region under `/cd3user/tenancies//terraform_files` without touching the existing ones and will commit the latest terraform_files folder to the DevOps GIT repo.
### **Step 1 - Exec into the Container**:
* Run ```docker ps```.
→ Note down the container ID from this cmd output.
* Run ```docker exec -it bash```
- Change Directory to 'user-scripts'
- ```cd /cd3user/oci_tools/cd3_automation_toolkit/user-scripts/```
-
-### **Step 2 - Create API PEM Key**:
-RSA key pair in PEM format (minimum 2048 bits) is needed to use OCI APIs. If the key pair does not exist, create them using the below command:
- ```python createAPIKey.py```
- → This will generate the public/private key pair (***_oci_api_public.pem_*** and ***_oci_api_private.pem_***) at **_/cd3user/tenancies/keys/_**
- → In case you already have the keys, you should copy the private key file inside the container and rename it to **_oci_api_private.pem_**.
-
-### **Step 3 - Upload the Public key**:
-Upload the Public key to **"APIkeys"** under user settings in OCI Console. Pre-requisite to use the complete functionality of the Automation Toolkit is to have the user as an administrator to the tenancy.
-- Open the Console, and sign in as the user.
- View the details for the user who will be calling the API with the key pair.
-- Open the Profile menu (User menu icon) and click User Settings.
-- Click Add Public Key.
Paste the contents of the PEM public key in the dialog box and click Add.
-
-### **Step 4 - Edit tenancyconfig.properties**:
-Enter the details to **tenancyconfig.properties** file. Please make sure to review 'outdir_structure_file' parameter as per requirements. It is recommended to use seperate outdir structure in case the tenancy has large number of objects.
-```
-[Default]
-# Mandatory Fields
-# Friendly name for the Customer Tenancy eg: demotenancy;
-# The generated .auto.tfvars will be prefixed with this customer name
-customer_name=
-tenancy_ocid=
-fingerprint=
-user_ocid=
-
-# Path of API Private Key (PEM Key) File; If the PEM keys were generated by running createAPI.py, leave this field empty.
-# Defaults to /cd3user/tenancies/keys/oci_api_private.pem when left empty.
-key_path=
-
-# Region ; defaults to us-ashburn-1 when left empty.
-region=
-# The outdir_structure_file defines the grouping of the terraform auto.tf.vars for the various generated resources.
-# To have all the files generated in the corresponding region, leave this variable blank.
-# To group resources into different directories within each region - specify the absolute path to the file.
-# The default file is specified below. You can make changes to the grouping in the below file to suit your deployment"
-outdir_structure_file=
-#or
-#outdir_structure_file=/cd3user/oci_tools/cd3_automation_toolkit/user-scripts/outdir_structure_file.properties
-
-# Optional Fields
-# SSH Key to launched instances
-ssh_public_key=
-
-```
-### **Step 5 - Initialise the environment**:
-Initialise your environment to use the Automation Toolkit.
+### **Step 2 - Choose Authentication Mechanism for OCI SDK**
+* Please click [here](/cd3_automation_toolkit/documentation/user_guide/Auth_Mechanisms_in_OCI.md) to configure any one of the available authentication mechanisms.
+
+### **Step 3 - Edit tenancyconfig.properties**:
+* Run ```cd /cd3user/oci_tools/cd3_automation_toolkit/user-scripts/```
+* Fill the input parameters in **tenancyconfig.properties** file.
+* Ensure to:
+ - Have the details ready for the Authentication mechanism you are planning to use.
+ - Use the same customer_name for a tenancy even if the script needs to be executed multiple times.
+  - Review the **'outdir_structure_file'** parameter as per requirements. It is recommended to use a separate outdir structure to manage
+    a large number of resources.
+  - Review the Advanced Parameters section for CI/CD setup and be ready with the user details that will be used to connect to the DevOps repo in OCI. Specifying these parameters as **'yes'** in the properties file will create an Object Storage bucket and a DevOps Git repo/project/topic in OCI
+    and enable toolkit usage via Jenkins.
+ > The toolkit supports users in primary IDCS stripes or default domains only for DevOps GIT operations.
+
+
+### **Step 4 - Initialise the environment**:
+* Initialise your environment to use the Automation Toolkit.
```python createTenancyConfig.py tenancyconfig.properties```
-**Note** - If the API Keys were generated and added to the OCI console using previous steps, it might take a couple of seconds to reflect. Thus, running the above command immediately might result in Authentication Errors. In such cases, please retry after a minute.
+> Note
+> * If you are running docker container on a linux VM host, please refer to [point no. 7](/cd3_automation_toolkit/documentation/user_guide/FAQ.md) under FAQ to avoid any permission issues.
+> * Running the above command immediately after adding API key to the user profile in OCI might result in Authentication Errors. In such cases, please retry after a minute.
-→ Example execution of the script:
- ![image](https://user-images.githubusercontent.com/103508105/221942089-5c52b221-96f1-4a73-9a10-46159ae4a75c.png)
+→ Example execution of the script with Advanced Parameters for CI/CD:
+
+
## Appendix
-→ Files created on successful execution of above steps - Description of the Generated files:
+
+ Expand this to view the details of the files created on successful execution of the above steps
| Files Generated | At File Path | Comment/Purpose |
| --------------- | ------------ | --------------- |
-| Config File | ```/cd3user/tenancies//_config``` | Customer specific Config file is required for OCI API calls. |
-| setUpOCI.properties | ```/cd3user/tenancies//_setUpOCI.properties``` | Customer Specific properties files will be created. |
-| outdir_structure_file | ```/cd3user/tenancies//_outdir_structure_file``` | Customer Specific properties file for outdir structure. This file will not be generated if 'outdir_structure_file' parameter was set to empty(single outdir) in tenancyconfig.properties while running createTenancy.py |
+| setUpOCI.properties | ```/cd3user/tenancies//_setUpOCI.properties``` | Customer Specific properties |
+| outdir_structure_file.properties | ```/cd3user/tenancies//_outdir_structure_file``` | Customer Specific properties file for outdir structure. This file will not be generated if 'outdir_structure_file' parameter was set to empty(single outdir) in tenancyconfig.properties while running createTenancyConfig.py |
| Region based directories | ```/cd3user/tenancies//terraform_files``` | Tenancy's subscribed regions based directories for the generation of terraform files. Each region directory will contain individual directory for each service based on the parameter 'outdir_structure_file' |
-| Variables File,Provider File, Root and Sub modules | ```/cd3user/tenancies//terraform_files/``` | Required for terraform to work. |
-| Public and Private Key Pair | Copied from ```/cd3user/tenancies/keys/``` to ```/cd3user/tenancies//``` | API Keys that were previously generated are moved to customer specific out directory locations for easy access. |
-| A log file with the commands to execute | ```/cd3user/tenancies//cmds.log``` | This file contains a copy of the Commands to execute section of the console output. |
+| Variables File, Provider File, Root and Sub terraform modules | ```/cd3user/tenancies//terraform_files/``` | Required for terraform to work. Variables file and Provider file will be generated based on the authentication mechanism chosen. |
+| out file | ```/cd3user/tenancies//createTenancyConfig.out``` | This file contains a copy of information displayed as the console output. |
+| OCI Config File | ```/cd3user/tenancies//.config_files/_oci_config``` | Customer specific Config file for OCI API calls. This will have data based on authentication mechanism chosen. |
+| Public and Private Key Pair | Copied from ```/cd3user/tenancies/keys/``` to ```/cd3user/tenancies//.config_files``` | API keys (when the authentication mechanism is API_Key) are copied to the customer specific out directory for easy access. |
+| GIT Config File | ```/cd3user/tenancies//.config_files/_git_config``` | Customer specific GIT Config file for OCI DevOps GIT operations. This is generated only if use_oci_devops_git is set to yes. |
+| S3 Credentials File | ```/cd3user/tenancies//.config_files/_s3_credentials``` | This file contains access key and secret for S3 compatible OS bucket to manage remote terraform state. This is generated only if use_remote_state is set to yes |
+| Jenkins Home | ```/cd3user/tenancies/jenkins_home``` | This folder contains Jenkins specific data. A single Jenkins instance can be set up per container. |
+| tenancyconfig.properties | ```/cd3user/tenancies//.config_files/_tenancyconfig.properties``` | The input properties file used to execute the script is copied to the customer folder for future reference. It can be used when the script needs to be re-run with the same parameters at a later stage. |
+
+The next pages will guide you through using the toolkit either via CLI or via Jenkins.
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md b/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md
new file mode 100644
index 000000000..1d733001b
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md
@@ -0,0 +1,33 @@
+# **Excel Sheet Templates**
+The CD3 Excel sheet is the main input for the Automation Toolkit.
+
+Below are the CD3 templates for the latest release, with standardised IAM components (compartments, groups and policies), network components, and Events & Notifications rules as per the CIS Foundations Benchmark for Oracle Cloud.
+
+Details on how to fill data into the Excel sheet can be found in the Blue section of each sheet inside the Excel file. Make appropriate changes to the templates (e.g. region) and use them for deployment.
+
+
+
+**CD3 Excel templates for OCI core services:**
+
+|Excel Sheet | Purpose |
+|-----------|----------------------------------------------------------------------------------------------------------------------------|
+| [CD3-Blank-template.xlsx](/cd3_automation_toolkit/example) | Choose this template while exporting existing resources from OCI into CD3 and Terraform.|
+| [CD3-CIS-template.xlsx](/cd3_automation_toolkit/example) | This template has auto-filled data of the CIS Landing Zone for DRGv2. Choose this template to create Core OCI Objects (IAM, Tags, Networking, Instances, LBR, Storage, Databases). |
+|[CD3-HubSpoke-template](/cd3_automation_toolkit/example) | This template has auto-filled data for a Hub and Spoke model of networking. Choose this template to create Core OCI Objects (IAM, Tags, Networking, Instances, LBR, Storage, Databases).|
+|[CD3-SingleVCN-template](/cd3_automation_toolkit/example) | This template has auto-filled data for a Single VCN model of networking. Choose this template to create Core OCI Objects (IAM, Tags, Networking, Instances, LBR, Storage, Databases).|
+
+
+
+
+**CD3 Excel template for OCI Management services:**
+
+
+|Excel Sheet| Purpose |
+|-----------|----------------------------------------------------------------------------------------------------------------------------|
+|[CD3-CIS-ManagementServices-template.xlsx](/cd3_automation_toolkit/example) | This template has auto-filled data of the CIS Landing Zone. Choose this template while creating the components of Events, Alarms, Notifications and Service Connectors.|
+
+
+
+> The Excel Templates can also be found at _/cd3user/oci_tools/cd3_automation_toolkit/example_ inside the container.
+> After deploying the infra using any of the templates, please run the [CIS compliance checker script](/cd3_automation_toolkit/documentation/user_guide/learn_more/CISFeatures.md#1-run-cis-compliance-checker-script).
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/FAQ.md b/cd3_automation_toolkit/documentation/user_guide/FAQ.md
index 7b7828a6f..22d30ac26 100644
--- a/cd3_automation_toolkit/documentation/user_guide/FAQ.md
+++ b/cd3_automation_toolkit/documentation/user_guide/FAQ.md
@@ -12,14 +12,7 @@
**3. If I am already using the toolkit and my OCI tenancy has been subscribed to a new region, how do i use the new region with toolkit?**
-Follow below steps to start using the newly subscribed region with the toolkit:
- - Take backup of the existing out directory.
- - Create a new directory for the region say 'london' along with other region directories.
- - Copy all the terraform modules and .tf files, except the .auto.tfvars and .tfstate files from existing region directory to the new one
- - Modify the name of variables file (eg variables_london.tf)
- - Modify the region parameter in this variables_london.tf
- - Modify Image OCIDs in this variables file according to new region.
-
+Re-run createTenancyConfig.py with the same details in the tenancyconfig.properties file. It will keep the existing region directories as-is and create a new directory for the newly subscribed region.
+
**4. How do I upgrade an existing version of the toolkit to the new one without disrupting my existing tenancy files/directories?**
@@ -46,13 +39,7 @@ Terraform destroy on compartments or removing the compartments details from _ - Add _enable\_delete = true_ parameter to each of the compartment that needs to be deleted in _\_compartments.auto.tfvars_
-**7. I am getting Timeout Error during export of DRG Route Rules while exporting Network Components.**
-
-
-Toolkit exports all Dynamic as well as Static DRG route Rules and timesout if there is a large number of dynamic rules. As a workaround, edit line no 220 in file _/cd3user/oci\_tools/cd3\_automation\_toolkit\Network\BaseNetwork\exportRoutetable.py_.
-Change _vcn = VirtualNetworkClient(config, timeout=(30,120))_ to _vcn = VirtualNetworkClient(config, timeout=(90,300))_
-
-**8. I am getting 'Permission Denied' error while executing any commands inside the container.**
+**7. I am getting 'Permission Denied' error while executing any commands inside the container.**
When you are running the docker container from a Linux OS, if the outdir is on the root, you may get a permission denied error while executing steps like createAPIKey.py. In such scenarios, please follow the steps given below -
diff --git a/cd3_automation_toolkit/documentation/user_guide/GF-Jenkins.md b/cd3_automation_toolkit/documentation/user_guide/GF-Jenkins.md
new file mode 100644
index 000000000..b09cda24f
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/GF-Jenkins.md
@@ -0,0 +1,21 @@
+# Provisioning of Instances/OKE/SDDC/Database on OCI via Jenkins
+
+To provision OCI resources which require input ssh keys and source image details, update the **variables_\.tf** file using the CLI.
+
+**Step 1**:
+ Update required data in `/cd3user/tenancies//terraform_files///variables_.tf`.
+
+**Step 2**:
+ Execute GIT commands to sync these local changes with the DevOps GIT repo.
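The sync in Step 2 is the standard git add/commit/push sequence. Below is a minimal sketch, demonstrated against a throwaway repository so it is safe to run anywhere; the file name and commit message are illustrative. In practice you would run the same commands from the existing clone under the tenancy's terraform_files directory.

```shell
# Illustrative git sequence for syncing an edited variables file.
# The repo, file name and message are placeholders; in practice, run
# these from the tenancy's terraform_files directory cloned from OCI DevOps.
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email "cd3user@example.com"
git config user.name "cd3user"
echo 'ssh_public_key = "ssh-rsa AAAA..."' > variables_phoenix.tf
git add variables_phoenix.tf
git commit -q -m "Update variables_phoenix.tf with ssh key and image details"
# against the real repo, finish with: git push origin <branch>
git log --oneline -1
```

When run against the real out directory, replace the scratch setup with the existing clone and finish with `git push` to the OCI DevOps remote.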
+
+**Step 3**:
+ Execute setUpOCI pipeline from Jenkins dashboard with workflow type as **Create Resources in OCI(Greenfield Workflow)** and choose the respective options to create required services.
+
+
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/GreenField-Jenkins.md b/cd3_automation_toolkit/documentation/user_guide/GreenField-Jenkins.md
new file mode 100644
index 000000000..c9287f8d6
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/GreenField-Jenkins.md
@@ -0,0 +1,88 @@
+# Create resources in OCI via Jenkins (Greenfield Workflow)
+
+## Execute setUpOCI Pipeline
+
+**Step 1**:
+ Choose the appropriate CD3 Excel sheet template from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md).
+Fill the CD3 Excel with appropriate values.
+
+
+**Step 2**:
+ Login to Jenkins URL with the user created after initialization and click on **setUpOCI pipeline** from Dashboard. Click on **'Build with Parameters'** from left side menu.
+
+
+
+>Note - Only one user at a time is supported on the Jenkins setup in the current release of the toolkit.
+
+
+**Step 3**:
+ Upload the above filled Excel sheet in **Excel_Template** section.
+
+
+
+>This will copy the Excel file to `/cd3user/tenancies/` inside the container. It will also take a backup of the existing Excel file on the container, by appending the current datetime, if the same filename is uploaded across executions.
+
+
+**Step 4:**
+ Select the workflow as **Create Resources in OCI(Greenfield Workflow)**. Choose single or multiple MainOptions as required and then corresponding SubOptions.
+ Please [read this](/cd3_automation_toolkit/documentation/user_guide/multiple_options_GF-Jenkins.md) before selecting multiple options simultaneously.
+ Below screenshot shows creation of Compartments (under Identity) and Tags.
+
+
+
+Click on **Build** at the bottom.
+
+
+**Step 5:**
+ setUpOCI pipeline is triggered and stages are executed as shown below.
+This will run the python script to generate the terraform auto.tfvars files. Once created, it will commit them to the OCI DevOps GIT repo and then launch the terraform-apply pipelines for the services chosen (Stage: phoenix/identity and Stage: phoenix/tagging in the below screenshot).
+
+
+
+## Execute terraform Pipelines
+Terraform pipelines are automatically triggered in parallel from the setUpOCI pipeline based on the services selected (the last two stages in the above screenshot show the trigger of terraform pipelines).
+
+**Step 1**:
+
+Click on 'Logs' for Stage: phoenix/identity and click on the pipeline link.
+
+> ***Note - Navigating to Dashboard displays pipelines that are in running state at the bottom left corner.***
+> ***Or you can also navigate from Dashboard using the region based view (Dashboard -> phoenix View -> service specific pipeline)***
+> ***in this example it would be:***
+> ***terraform_files » phoenix » tagging » terraform-apply***
+> ***terraform_files » phoenix » identity » terraform-apply***
+
+**Step 2**:
+ Stages of the terraform pipeline for apply are shown below:
+
+
+
+**Step 3**:
+ Review Logs for Terraform Plan and OPA stages by clicking on the stage and then 'Logs'.
+
+
+
+
+**Step 4**:
+ The 'Get Approval' stage has a timeout of 24 hours; if no action is taken, the pipeline will be aborted after 24 hours. Click on this stage and click 'Proceed' to proceed with the terraform apply or 'Abort' to cancel it.
+
+
+
+
+**Step 5**:
+ Below screenshot shows Stage View after clicking on 'Proceed'. Login to the OCI console and verify that resources got created as required.
+
+
+
+**Step 6**:
+ Similarly, click on 'Logs' for Stage: phoenix/tagging, click on the pipeline link, and 'Proceed' or 'Abort' the terraform apply.
diff --git a/cd3_automation_toolkit/documentation/user_guide/GreenField.md b/cd3_automation_toolkit/documentation/user_guide/GreenField.md
index dc41ea08a..a3360a468 100644
--- a/cd3_automation_toolkit/documentation/user_guide/GreenField.md
+++ b/cd3_automation_toolkit/documentation/user_guide/GreenField.md
@@ -1,17 +1,36 @@
-# Green Field Tenancies
-
-## Detailed Steps
-Below are the steps that will help to configure the Automation Tool Kit to support the Green Field Tenancies:
+# Create resources in OCI (Greenfield Workflow)
**Step 1**:
- Choose the appropriate CD3 Excel sheet template from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md#excel-sheet-templates)
+ Choose the appropriate Excel sheet template from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md)
**Step 2**:
- Fill the CD3 Excel with appropriate values specific to the client and put at the appropriate location.
- Modify/Review [setUpOCI.properties](/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md#setupociproperties) with **non_gf_tenancy** set to **false** as shown below:
+ Fill the Excel with appropriate values and put at the appropriate location.
+ Modify/Review _/cd3user/tenancies//\_setUpOCI.properties_ with **workflow_type** set to **create_resources** as shown below:
+```ini
+#Input variables required to run setUpOCI script
+
+#path to output directory where terraform file will be generated. eg /cd3user/tenancies//terraform_files
+outdir=/cd3user/tenancies/demotenancy/terraform_files/
+
+#prefix for output terraform files eg like demotenancy
+prefix=demotenancy
+
+# auth mechanism for OCI APIs - api_key,instance_principal,session_token
+auth_mechanism=api_key
+
+#input config file for Python API communication with OCI eg /cd3user/tenancies//.config_files/_config;
+config_file=/cd3user/tenancies/demotenancy/.config_files/demotenancy_oci_config
+
+# Leave it blank if you want single outdir or specify outdir_structure_file.properties containing directory structure for OCI services.
+outdir_structure_file=/cd3user/tenancies/demotenancy/demotenancy_outdir_structure_file.properties
-![image](https://user-images.githubusercontent.com/103508105/221797142-c780dbd6-883f-450f-9929-dce81d32079e.png)
+#path to cd3 excel eg /cd3user/tenancies//CD3-Customer.xlsx
+cd3file=/cd3user/tenancies/demotenancy/CD3-Blank-template.xlsx
+#specify create_resources to create new resources in OCI(greenfield workflow)
+#specify export_resources to export resources from OCI(non-greenfield workflow)
+workflow_type=create_resources
+```
**Step 3**:
Execute the SetUpOCI.py script to start creating the terraform configuration files.
@@ -57,7 +76,7 @@ Follow the below steps to quickly provision a compartment on OCI.
2. Edit the _setUpOCI.properties_ at location:_/cd3user/tenancies //\_setUpOCI.properties_ with appropriate values.
- Update the _cd3file_ parameter to specify the CD3 excel sheet path.
- - Set the _non_gf_tenancy_ parameter value to _false_. (for Greenfield Workflow.)
+ - Set the _workflow_type_ parameter value to _create_resources_. (for Greenfield Workflow.)
3. Change Directory to 'cd3_automation_toolkit' :
```cd /cd3user/oci_tools/cd3_automation_toolkit/```
@@ -66,7 +85,7 @@ Follow the below steps to quickly provision a compartment on OCI.
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
-4. Choose option to create compartments under 'Identity' from the displayed menu. Once the execution is successful, _\_compartments.auto.tfvars_ file will be generated under the folder _/cd3user/tenancies//terraform_files/_
+4. Choose option to create compartments under 'Identity' from the displayed menu. Once the execution is successful, _\_compartments.auto.tfvars_ file will be generated under the folder _/cd3user/tenancies//terraform_files//_
Navigate to the above path and execute the terraform commands:
_terraform init_
@@ -80,7 +99,7 @@ Follow the below steps to quickly provision a compartment on OCI.
diff --git a/cd3_automation_toolkit/documentation/user_guide/Intro-Jenkins.md b/cd3_automation_toolkit/documentation/user_guide/Intro-Jenkins.md
new file mode 100644
index 000000000..63490648d
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/Intro-Jenkins.md
@@ -0,0 +1,80 @@
+
+## **Introduction to Jenkins with the toolkit**
+
+### Jenkins Dashboard
+
+1. setUpOCI Pipeline
+2. terraform_files Folder
+3. Region based Views (including Global directory)
+
+
+
+
+### 1. setUpOCI Pipeline
+
+This is equivalent to running *setUpOCI.py* from the CLI. It generates the terraform **.auto.tfvars** files from the CD3 Excel sheet input for the services chosen and commits them to the OCI DevOps GIT repo. It also triggers the **terraform-apply** pipelines for the corresponding services chosen in the setUpOCI pipeline.
+
+Below table shows the stages executed in this pipeline along with their description:
+
+
+
+**setUpOCI Pipeline Stages**
+
+|Stage Name | Description | Possible Outcomes |
+| --------------- | ------------ | ----------------- |
+| Validate Input Parameters | Validates the input file name/size and the selected parameters | Displays Unstable if any validation fails. The pipeline stops further execution in that case. |
+| Update setUpOCI.properties | Updates _setUpOCI.properties with input filename and workflow_type | Displays Failed if any issue during execution |
+| Execute setUpOCI | Executes python code to generate required tfvars files. The console output for this stage is similar to setUpOCI.py execution via CLI. Multiple options selected will be processed sequentially in this stage. | Displays Failed if any issue occurs during its execution. Further stages are skipped in that case. |
+| Run Import Commands | Based on the workflow_type as 'Export Resources from OCI', this stage invokes execution of tf_import_commands_\_nonGF.sh shell scripts which will import the exported objects into tfstate. tf_import_commands for multiple options selected will be processed sequentially in this stage. This stage is skipped for 'Create Resources in OCI' workflow | Displays Failed if any issue occurs during its execution. Further stages are skipped in that case. |
+| Git Commit | Commits the terraform_files folder to the OCI DevOps GIT repo. This will trigger the respective terraform pipelines. | The pipeline stops further execution if there is nothing to commit. In some cases, when the tfvars were generated in a previous execution, you can navigate to the terraform-apply pipeline and trigger it manually. |
+| Trigger Terraform Pipelines | Corresponding terraform apply pipelines are auto triggered based on the service chosen | |
+
+
+
+
+### 2. terraform_files Folder
+
+This is equivalent to **/cd3user/tenancies//terraform_files** folder on your local system.
+The region directories, along with all service directories, are present under this terraform_files folder.
+Inside each service directory, pipelines for **terraform-apply** and **terraform-destroy** are present.
+
+The terraform pipelines are either triggered automatically from setUpOCI pipeline or they can be triggered manually by navigating to any service directory path.
+
+
+
+**terraform-apply Pipeline Stages**
+
+|Stage Name | Description | Possible Outcomes |
+| --------------- | ------------ | ----------------- |
+| Checkout SCM | Checks out the latest terraform_files folder from DevOps GIT repo | |
+| Terraform Plan | Runs terraform plan against the checked out code and saves it in tfplan.out | Pipeline stops further execution if terraform plan shows no changes. Displays Failed if any issue while executing terraform plan |
+| OPA | Runs the above generated terraform plan against Open Policies and displays the violations, if any | Displays Unstable if any OPA rule is violated |
+| Get Approval | Approval stage for reviewing the terraform plan. There is a 24-hour timeout for this stage. | Proceed - goes ahead with the Terraform Apply stage. Abort - the pipeline is aborted and stops further execution |
+|Terraform Apply | Applies the terraform configurations | Displays Failed if any issue while executing terraform apply |
+
+
+
+
+
+**terraform-destroy Pipeline Stages**
+
+|Stage Name | Description | Possible Outcomes |
+| --------------- | ------------ | ----------------- |
+| Checkout SCM | Checks out the latest terraform_files folder from DevOps GIT repo | |
+| Terraform Destroy Plan | Runs `terraform plan -destroy` against the checked out code | Displays Failed if any issue in plan output |
+| Get Approval | Approval stage for reviewing the terraform plan. There is a 24-hour timeout for this stage. | Proceed - goes ahead with the Terraform Destroy stage. Abort - the pipeline is aborted and stops further execution |
+|Terraform Destroy | Destroys the terraform configurations | Displays Failed if any issue while executing terraform destroy |
+
+
+
+### 3. Region Based Views
+When you click on any of the views, it displays all terraform-apply and terraform-destroy pipelines in a single screen. This can also be used to trigger the terraform pipelines. A Global view is included for global services like RPC.
+
+
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/Jobs_Migration.md b/cd3_automation_toolkit/documentation/user_guide/Jobs_Migration.md
new file mode 100755
index 000000000..1932ac501
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/Jobs_Migration.md
@@ -0,0 +1,82 @@
+# Migrate Jobs from Automation Toolkit Jenkins to Customer Jenkins Environment
+
+
+1. Copy Jobs Folder
+ - Copy the folders from the Automation Toolkit Jenkins home path `/cd3user/tenancies/jenkins_home/jobs/` to the corresponding home directory in the Customer Jenkins instance (typically `/var/jenkins_home`).
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/5a1f54f1-3e50-4ec7-8634-494eec65ce56)
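The copy in step 1 can be sketched as below; scratch directories stand in for the real paths (`/cd3user/tenancies/jenkins_home` and `/var/jenkins_home`), so the snippet is safe to try anywhere.

```shell
# Sketch of copying Jenkins job definitions between home directories.
# src/dst are stand-ins for /cd3user/tenancies/jenkins_home and /var/jenkins_home.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/jobs/setUpOCI"
printf '<project/>\n' > "$src/jobs/setUpOCI/config.xml"   # hypothetical job config
cp -a "$src/jobs" "$dst/"    # -a preserves permissions and timestamps
ls "$dst/jobs"
```

Stop the customer Jenkins instance before copying, and make sure the copied files end up owned by the user Jenkins runs as.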
+
+2. Set up OCI Devops repository SSH Authentication
+ - Ensure SSH authentication is configured and operational on the Customer Jenkins instance. For detailed instructions, refer to the [OCI Code Repository documentation](https://docs.oracle.com/en-us/iaas/Content/devops/using/ssh_auth.htm).
+
+ > Note - Steps to change the GIT repo are explained in next section.
+
+3. Ensure Availability of Ansi Color Plugin
+ - Confirm the presence of the AnsiColor plugin in the Customer Jenkins instance. This plugin is used by the Automation Toolkit pipeline Groovy code and must be installed if not already present. Plugin link: [AnsiColor Plugin](https://plugins.jenkins.io/ansicolor/)
+
+4. Install Terraform Binary
+ - Make sure the Terraform binary is installed and accessible for the Jenkins user within the Jenkins instance. Installation guide: [Terraform Installation](https://developer.hashicorp.com/terraform/install)
+
+5. Update Optional Attribute Field inside Terraform Provider Block at `/cd3user/tenancies//terraform_files//provider.tf`
+ - Include an attribute as highlighted below within the Terraform provider block. This is optional but necessary in case Terraform plan encounters an error.
+
+ experiments = [module_variable_optional_attrs]
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/2e1593ee-e4cc-4439-8ffa-97d39dda16a6)
+
+6. Update the correct value for private_key_path variable in `/cd3user/tenancies//terraform_files//variables_.tf`
+
+7. Configure S3 Backend Credentials in Customer Jenkins Instance
+ - Update the correct path within the `backend.tf` file for Terraform.
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/bfd6d2a2-7384-4bb0-a30b-5b7fd63c0e9b)
+
+8. Push the above changes to the DevOps GIT repository so that the pipeline can pick up the latest commits/changes and execute them.
+
+9. Stop/Start the Customer Jenkins Instance for the changes to take effect. This is applicable for any configuration changes in Jenkins.
+
+10. Job and Pipeline Configuration
+ - Verify that the specified jobs and pipelines, initialized by the Automation Toolkit, are visible in the Customer Jenkins instance.
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/3fca2b65-78b0-4528-a821-c43b5950cc90)
+
+11. Pipeline Job Output
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/4bb57802-1594-4361-9c54-46022abf190a)
+
+
+# Update the Git URL for all pipeline jobs in the Customer Jenkins (if required)
+
+1. Remove the terraform_files folder under the /jobs folder.
+2. Create `jenkins.properties` File
+ - Copy the `jenkins.properties` file from the Automation Toolkit Jenkins home folder `/cd3user/tenancies/jenkins_home/` to the customer Jenkins home (typically `/var/jenkins_home/`) directory in the customer Jenkins instance (below is sample content):
+
+ git_url= "ssh://devops.scmservice.us-phoenix-1.oci.oraclecloud.com/namespaces//projects/toolkitdemo-automation-toolkit-project/repositories/toolkitdemo-automation-toolkit-repo"
+ regions=['london', 'phoenix']
+ services=['identity', 'tagging', 'network', 'loadbalancer', 'vlan', 'nsg', 'compute', 'database', 'fss', 'oke', 'ocvs', 'security', 'managementservices', 'budget', 'cis', 'oss', 'dns']
+ outdir_structure=["Multiple_Outdir"]
+
+
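As a hedged sanity check before restarting Jenkins, you can confirm the copied file carries the keys the init groovy script reads. The check is shown against a sample file in a temp location; point `f` at the real `jenkins.properties` in practice.

```shell
# Quick check that jenkins.properties defines the expected keys.
# The sample content below is illustrative.
f=$(mktemp)
cat > "$f" <<'EOF'
git_url= "ssh://devops.scmservice.us-phoenix-1.oci.oraclecloud.com/..."
regions=['london', 'phoenix']
services=['identity', 'network']
outdir_structure=["Multiple_Outdir"]
EOF
for key in git_url regions services outdir_structure; do
  grep -q "^${key}=" "$f" && echo "${key}: ok"
done
```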
+3. Update the `git_url` in the `jenkins.properties` File
+ - Open the `jenkins.properties` file located in the `/var/jenkins_home/` directory.
+ - Update the `git_url` in the file with the new Git server URL.
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/2056b8a3-c27e-481a-893a-a2ffba628c03)
+
+
+4. Copy `01_jenkins-config.groovy` File
+ - Copy the `01_jenkins-config.groovy` file from the Automation Toolkit Jenkins path (`/cd3user/tenancies/jenkins_home/init.groovy.d`) to the init path of the Customer Jenkins instance.
+ - Update the path to the groovy file accordingly.
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/348db348-7eee-47ed-88f4-32f1ecd25e0b)
+
+
+5. Restart Customer Jenkins Instance
+ - Stop and start the Customer Jenkins instance to apply the changes.
+ - After that, all Git URLs inside the pipeline jobs will be updated to point to the new Git URL.
+
+ ![image](https://github.com/unamachi/cd3-automation-toolkit/assets/103548537/83dc5e7a-4ceb-44a1-871f-4d9e314a3ce1)
+
+6. Ensure SSH Authentication
+ - Confirm that SSH authentication is enabled for the new GIT repository from the Jenkins instance.
+ - Alternatively, use the respective authentication method if relying on other methods.
diff --git a/cd3_automation_toolkit/documentation/user_guide/KnownBehaviour.md b/cd3_automation_toolkit/documentation/user_guide/KnownBehaviour.md
index 69ff35261..db8ddef29 100644
--- a/cd3_automation_toolkit/documentation/user_guide/KnownBehaviour.md
+++ b/cd3_automation_toolkit/documentation/user_guide/KnownBehaviour.md
@@ -1,29 +1,35 @@
# Expected Behaviour Of Automation Toolkit
-### NOTE:
-1. Automation Tool Kit DOES NOT support the creation/export of duplicate resources.
-2. DO NOT modify/remove any commented rows or column names. You may re-arrange the columns if needed.
-3. A double colon (::) or Semi-Colon (;) has a special meaning in the Tool Kit. Do not use them in the OCI data / values.
-4. Do not include any unwanted space in cells you fill in; do not place any empty rows in between.
-5. Any entry made/moved post \ in any of the tabs of CD3 will not be processed. Any resources created by the automation & then moved after the \ will cause the resources to be removed.
-6. The components that get created as part of VCNs Tab (Example: IGW, SGW, LPG, NGW, DRG) will have the same set of Tags attached to them.
-7. Automation Tool Kit does not support sharing of Block Volumes.
-8. Some points to consider while modifying networking components are:
- - Converting the exported VCN to Hub/Spoke/Peer VCN is not allowed. Route Table rules based on the peering for new LPGs to existing VCNs will not be auto populated. Users are requested to add an entry to the RouteRulesInOCI sheet to support the peering rules.
- - Adding a new VCN as Hub and other new VCNs as Spoke/Peer is allowed. Gateways will be created as specified in VCNs sheet.
- - Adding new VCNs as None is allowed. Gateways will be created as specified in VCNs sheet.
- - The addition of new Subnets to exported VCNs and new VCNs is allowed.
-9. When you have exported Identity and Network services together in single outdirectory for the first time and executing identity import script. You might see import failure with below error message. Execute Major network import script first then run Identity import script.
-
-```
-!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
-4 problems:
-- Failed to serialize resource instance in state: Instance data.oci_core_drg_route_distributions.drg_route_distributions["DRG-ASH_Autogenerated-Import-Route-Distribution-for-ALL-routes"] has status ObjectPlanned, which cannot be saved in state.
-- Failed to serialize resource instance in state: Instance data.oci_core_drg_route_distributions.drg_route_distributions["DRG-ASH_Autogenerated-Import-Route-Distribution-for-VCN-Routes"] has status ObjectPlanned, which cannot be saved in state.
-```
+> [!NOTE]
+> 1. Automation Tool Kit *DOES NOT* support the creation/export of duplicate resources.
+> 2. Automation Tool Kit *DOES NOT* support sharing of Block Volumes.
+
+> [!WARNING]
+> 1. DO NOT modify/remove any commented rows or column names. You may re-arrange the columns if needed.
+> 2. A double colon (::) or Semi-Colon (;) has a special meaning in the Tool Kit. Do not use them in the OCI data / values.
+> 3. Do not include any unwanted space in cells you fill in; do not place any empty rows in between.
+> 4. Any entry made/moved post \ in any of the tabs of CD3 will not be processed. Any resources created by the automation & then moved after the \ will cause the resources to be removed.
+
+> [!IMPORTANT]
+> The components that get created as part of VCNs Tab (Example: IGW, SGW, LPG, NGW, DRG) will have the same set of Tags attached to them.
+> Some points to consider while modifying networking components are:
+> 1. Converting the exported VCN to Hub/Spoke/Peer VCN is not allowed. Route Table rules based on the peering for new LPGs to existing VCNs will not be auto populated. Users are requested to add an entry to the RouteRulesInOCI sheet to support the peering rules.
+> 2. Adding a new VCN as Hub and other new VCNs as Spoke/Peer is allowed. Gateways will be created as specified in VCNs sheet.
+> 3. Adding new VCNs as None is allowed. Gateways will be created as specified in VCNs sheet.
+> 4. The addition of new Subnets to exported VCNs and new VCNs is allowed.
+> 5. You might come across the below error during export of NSGs (while running terraform import commands for NSGs). It occurs when the NSG and the VCN are in different compartments. In such cases, please modify \_nsgs.auto.tfvars and specify the compartment name of the VCN in the network_compartment_id field of the problematic NSG.
+
+> 6. When you have exported Identity and Network services together in a single outdirectory for the first time and are executing the identity import script, you might see an import failure with the below error message. Execute the Major network import script first, then run the Identity import script.
+ ```
+ !!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
+ 4 problems:
+ - Failed to serialize resource instance in state: Instance data.oci_core_drg_route_distributions.drg_route_distributions["DRG-ASH_Autogenerated-Import-Route-Distribution-for-ALL-routes"] has status ObjectPlanned, which cannot be saved in state.
+ - Failed to serialize resource instance in state: Instance data.oci_core_drg_route_distributions.drg_route_distributions["DRG-ASH_Autogenerated-Import-Route-Distribution-for-VCN-Routes"] has status ObjectPlanned, which cannot be saved in state.
+ ```
## Terraform Behavior
-- Create a Load Balancer with Reserved IP: When you create a LBaaS with reserved ip as "Y" and do a terraform apply, everything will go smooth and be in sync for the first time. If you do a terraform plan immediately (post apply), you will find that the plan changes the private ip of load balancer to null.
+#### 1. Create a Load Balancer with Reserved IP
+When you create a LBaaS with reserved ip as "Y" and do a terraform apply, everything will go smoothly and be in sync the first time. If you do a terraform plan immediately (post apply), you will find that the plan changes the private ip of the load balancer to null.
![image](https://user-images.githubusercontent.com/122371432/214501615-c84d26bb-1227-42b7-bc86-a6f82020aab0.png)
@@ -35,29 +41,32 @@
Once you do the above change and then execute a terraform plan/apply, you will get the below error, which can be ignored.
![image](https://user-images.githubusercontent.com/122371432/214502222-09eb5bb2-4a21-43fa-89b9-6540324c7f75.png)
-
-
-- While exporting and synching the tfstate file for LBaaS Objects, the user may be notified that a few components will be modified on apply. In such scenarios, add the attributes that the Terraform notifies to be changed to the appropriate CD3 Tab of Load Balancer and uncomment the parameter from Jinja2 Templates and Terraform (.tf) files. Re-run the export.
+
+#### 2.
+While exporting and syncing the tfstate file for LBaaS objects, the user may be notified that a few components will be modified on apply. In such scenarios, add the attributes that Terraform reports as changed to the appropriate CD3 tab of the Load Balancer, uncomment the parameter in the Jinja2 templates and Terraform (.tf) files, and re-run the export.
-- Add a new column - "Freeform Tags" to the CD3 Excel Sheets as per necessity, to export the tags associated with the resource as well. If executed as-is, Terraform may prompt you to modify resources based on Tags.
+#### 3.
+Add a new column - "Freeform Tags" - to the CD3 Excel sheets as needed, so that the tags associated with the resource are exported as well. If executed as-is, Terraform may prompt you to modify resources based on tags.
**Example:**
-- Toolkit will create TF for only those DRGs which are part of CD3 and skip Route Tables for the DRGs created outside of CD3. This will also synch DRG rules in your tenancy with the terraform state.
+#### 4.
+The toolkit will create TF for only those DRGs which are part of CD3 and skip Route Tables for the DRGs created outside of CD3. This will also sync the DRG rules in your tenancy with the terraform state.
> **Note**
> When there are changes made in the OCI console manually, the above options of export and modify can be helpful to sync up the contents/objects in OCI to TF.
-- Match All criteria specified for Route Distribution Statement In DRGs sheet will show below output each time you do terraform plan:
+#### 5.
+A Match All criterion specified for a Route Distribution Statement in the DRGs sheet will show the below output each time you do a terraform plan:
![image](https://user-images.githubusercontent.com/122371432/214504858-2c5ba6af-b030-4f72-b6d9-8bc37b5902cf.png)
The service API is designed to expect an empty list for Match All, and it sends back an empty list in the response every time; hence this behaviour on the Terraform side. It can be safely ignored.
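For reference, a minimal sketch of such a statement (the resource name is from the OCI Terraform provider; the exact attribute shape may differ by provider version, so check the provider docs):

```hcl
# Hypothetical ACCEPT statement matching all routes. The service stores the
# Match All case as an empty criteria list and echoes an empty list back in
# every response, so terraform plan keeps reporting a harmless change here.
resource "oci_core_drg_route_distribution_statement" "match_all" {
  drg_route_distribution_id = oci_core_drg_route_distribution.import_all.id
  action                    = "ACCEPT"
  priority                  = 1
  match_criteria {
    match_type = "MATCH_ALL"
  }
}
```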
-
-- Export process for non greenfield tenancies v6.0 or higher will try to revert SGW for a VCN to point to all services if it was existing for just object storage. You will get output similiar to below when terraform plan is run (Option 3 with non-gf_tenancy set to true).
+#### 6.
+The export process for non-greenfield tenancies v6.0 or higher will try to revert an SGW for a VCN to point to all services if it existed for just Object Storage. You will get output similar to the below when terraform plan is run (Option 3 with workflow_type set to export_resources).
```
# oci_core_service_gateway.VCN_sgw will be updated in-place
@@ -101,7 +110,8 @@
}
```
-- If the description field is having any newlines in the tenancy then the export of the component and tf synch will show output similair to below:
+#### 7.
+If the description field has any newlines in the tenancy, then the export of the component and tf sync will show output similar to the below:
```
# module.iam-policies[“ConnectorPolicy_notifications_2023-03-06T21-54-41-655Z”].oci_identity_policy.policy will be updated in-place
@@ -123,12 +133,11 @@
This is how terraform handles newlines in the fields. Please ignore this and proceed with terraform apply.
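The behaviour can be reproduced with any policy whose description contains a literal newline; a minimal hypothetical example (the resource name is from the OCI Terraform provider, all other values are placeholders):

```hcl
# Terraform's rendering of the embedded "\n" differs from what the service
# echoes back, so terraform plan keeps showing an in-place description update
# that is safe to ignore, as noted above.
resource "oci_identity_policy" "example" {
  compartment_id = var.tenancy_ocid
  name           = "ConnectorPolicy_example"
  description    = "Policy for the service connector\ncreated by notifications"
  statements     = ["Allow group NetworkAdmins to manage virtual-network-family in tenancy"]
}
```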
-- You might come across below error during export of NSGs(while runnig terraform import commands for NSGs)
- ![image](https://github.com/oracle-devrel/cd3-automation-toolkit/assets/103508105/5a50cdb5-b6cf-49fa-b488-1419d32c6b13)
- This occurs when NSG and the VCN are in different compartments. In such cases, please modify _nsgs.auto.tfvars, specify the compartment name of the VCN in network_compartment_id field of the problematic NSG.
-
-- Terraform ordering changes observed during plan phase for OCI compute plugin's.
+#### 8.
+Terraform ordering changes are observed during the plan phase for OCI compute plugins.
![image](https://github.com/oracle-devrel/cd3-automation-toolkit/assets/103548537/f6a2d481-5e79-484b-a24e-a8329e8b6626)
- It changes the order of plugin's in terraform state file and doesn't change anything in OCI for compute resource.
+ It changes the order of plugins in the terraform state file and doesn't change anything in the OCI console for the compute resource.
+
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md b/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md
index f2e63d6f0..9bedaae0b 100644
--- a/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md
+++ b/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md
@@ -1,34 +1,39 @@
-# Launch Docker Container
+# Launch the Container
To ease the execution of the toolkit, we have provided steps to build an image that encloses the code base and its package dependencies. Follow the steps below to clone the repo, build the image and finally launch the container.
## Clone the repo
-* Open your terminal and change the directory to the one where you want to download the git repo.
+* Open your terminal and navigate to the directory where you plan to download the Git repo.
* Run the git clone command as shown below:
```git clone https://github.com/oracle-devrel/cd3-automation-toolkit```
-* Once the cloning command completes successfully, the repo will replicate to the local directory.
+* Once the clone command completes successfully, the repo will be replicated to the local directory.
## Build an image
* Change directory to 'cd3-automation-toolkit'(i.e. the cloned repo in your local).
* Run ```docker build --platform linux/amd64 -t cd3toolkit:${image_tag} -f Dockerfile --pull --no-cache .```
- Note : ${image_tag} should be replaced with suitable tag as per your requirements/standards.
+ Note: ${image_tag} should be replaced with a suitable tag as per your requirements/standards, e.g. v2024.1.0
The period (.) at the end of the docker build command is required.
## Save the image (Optional)
* Run ```docker save cd3toolkit:${image_tag} | gzip > cd3toolkit_${image_tag}.tar.gz```
-## Run CD3 container alongwith VPN (Applicable for VPN users only)
+## Run the container along with VPN (Applicable for VPN users only)
* Connect to the VPN.
* Make sure you are using version **1.9** of **Rancher Desktop**; if not, please install the latest.
* Make sure to enable **Networking Tunnel** under Rancher settings as shown in the screenshot below.
-* Login to the CD3 docker container using next section and set the proxies which helps to connect internet(if any) from container.
+* Log in to the CD3 docker container using the next section and set the proxies (if any) that help connect to the internet from the container.
+
+## Run the container
+* Run ```docker run --platform linux/amd64 -it -p :8443 -d -v :/cd3user/tenancies :```
+
+ E.g. for Mac: ```docker run --platform linux/amd64 -it -p 8443:8443 -d -v /Users//mount_path:/cd3user/tenancies cd3toolkit:v2024.1.0```
+
+ E.g. for Windows: ```docker run --platform linux/amd64 -it -p 8443:8443 -d -v D:/mount_path:/cd3user/tenancies cd3toolkit:v2024.1.0```
-## Run the CD3 container
-* Run ```docker run --platform linux/amd64 -it -d -v :/cd3user/tenancies :```
* Run ```docker ps```
diff --git a/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF-Jenkins.md b/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF-Jenkins.md
new file mode 100644
index 000000000..165d6be48
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF-Jenkins.md
@@ -0,0 +1,188 @@
+# Executing Networking Scenarios using toolkit via Jenkins
+
+## Managing Network for Greenfield Workflow
+- [Create Network](#create-network)
+- [Modify Network](#modify-network)
+- [Modify Security Rules, Route Rules and DRG Route Rules](#modify-security-rules-route-rules-and-drg-route-rules)
+- [Sync manual changes done in OCI of Security Rules, Route Rules and DRG Route Rules with CD3 Excel Sheet and Terraform](#sync-manual-changes-done-in-oci-of-security-rules-route-rules-and-drg-route-rules-with-cd3-excel-sheet-and-terraform)
+- [Add/Modify/Delete NSGs](#addmodifydelete-nsgs)
+- [Add/Modify/Delete VLANs](#addmodifydelete-vlans)
+- [Remote Peering Connections](#rpcs)
+
+
+**NOTE-**
+
+### Create Network
+Creation of Networking components using the Automation Toolkit involves four simple steps:
+ - Adding the networking resource details to the appropriate Excel sheets.
+ - Running the setUpOCI pipeline in the toolkit to generate auto.tfvars.
+ - Executing the terraform pipeline to provision the resources in OCI.
+ - Exporting the Security Rules and Route Rules automatically generated by the toolkit to the CD3 Excel Sheet.
+
+Below are the steps to create Network that includes VCNs, Subnets, DHCP, DRG, Security List, Route Tables, DRG Route Tables, NSGs, etc.
+
+1. Choose appropriate excel sheet from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md) and fill the required Network details in the Networking Tabs - VCNs, DRGs, VCN Info, DHCP, Subnets, NSGs tabs.
+
+2. Execute the _setupOCI_ pipeline with _Workflow_ as _Create Resources in OCI(Greenfield Workflow)_
+
+3. Choose option _'Validate CD3'_ and then _'Validate Networks'_ to check for syntax errors in Excel sheet. Examine the log file generated at _/cd3user/tenancies//\_cd3validator.log_. If there are errors, please rectify them accordingly and proceed to the next step.
+
+4. Choose _'Create Network'_ under _'Network'_ from the displayed options. Click on Build.
+
+
+5. It will show different stages of execution of _setUpOCI_ pipeline and also launch the _terraform-apply_ pipeline for 'network'.
+6. Click on Proceed for 'Get Approval' stage of the terraform pipeline.
+
+ This completes the creation of Networking components in OCI. Verify the components in the console. However, the details of the default security lists and default route tables are not available in the CD3 Excel sheet yet. In order to export that data, please follow the below steps:
+
+7. Execute the _setupOCI.py_ pipeline with _Workflow_ as _Create Resources in OCI(Greenfield Workflow)_
+8. Choose _'Network'_ from the displayed options. Choose the below sub-options: (Make sure to choose all three options for the first time)
+ - Security Rules
+ - Export Security Rules (From OCI into SecRulesinOCI sheet)
+ - Route Rules
+ - Export Route Rules (From OCI into RouteRulesinOCI sheet)
+ - DRG Route Rules
+ - Export DRG Route Rules (From OCI into DRGRouteRulesinOCI sheet)
+ Click on Build.
+
+
+
+
+This completes the steps for Creating the Network in OCI and exporting the default rules to the CD3 Excel Sheet using the Automation Toolkit.
+
+ [Go back to Networking Scenarios](#executing-networking-scenarios-using-toolkit-via-jenkins)
+### Modify Network
+Modifying the Networking components using the Automation Toolkit involves three simple steps:
+ - Adding/modifying the details of networking components like the VCNs, Subnets, DHCP and DRG in the Excel Sheet.
+ - Running the setUpOCI pipeline in the toolkit to generate auto.tfvars.
+ - Executing the Terraform pipeline to provision/modify the resources in OCI.
+
+ ***Note***: Follow [these steps](#modify-security-rules-route-rules-and-drg-route-rules) to modify Security Rules, Route Rules and DRG Route Rules
+
+_Steps in detail_:
+1. Modify your excel sheet to update required data in the Tabs - VCNs, DRGs, VCN Info, DHCP and Subnets.
+2. Execute the _setupOCI.py_ pipeline with _Workflow_ as _Create Resources in OCI(Greenfield Workflow)_
+3. To Validate the CD3 excel Tabs - Choose option _'Validate CD3'_ and _'Validate Networks'_ from sub-menu to check for syntax errors in Excel sheet. Examine the log file generated at _/cd3user/tenancies//\_cd3validator.logs_. If there are errors, please rectify them accordingly and proceed to the next step.
+4. Choose option to _'Modify Network'_ under _'Network'_ from the displayed options. Once the execution is successful, multiple .tfvars related to networking like _\_major-objects.auto.tfvars_ and more will be generated under the folder _/cd3user/tenancies//terraform_files//_. Existing files will move into respective backup folders.
+
+ **Note**: Make sure to export the Sec Rules, Route Rules and DRG Route Rules to the CD3 Excel Sheet before executing this option.
+
+5. It will show different stages of execution of _setUpOCI_ pipeline and also launch the _terraform-apply_ pipeline for 'network'.
+6. Click on Proceed for 'Get Approval' stage of the terraform pipeline.
+
+This completes the modification of Networking components in OCI. Verify the components in console.
+
+ [Go back to Networking Scenarios](#executing-networking-scenarios-using-toolkit-via-jenkins)
+### Modify Security Rules, Route Rules and DRG Route Rules
+
+Follow the below steps to add, update or delete the following components:
+- Security Lists and Security Rules
+- Route Table and Route Rules
+- DRG Route Table and DRG Route Rules
+
+1. Modify your excel sheet to update required data in the Tabs - RouteRulesInOCI, SecRulesInOCI, DRGRouteRulesInOCI tabs.
+
+2. Execute the _setupOCI.py_ pipeline with _Workflow_ as _Create Resources in OCI(Greenfield Workflow)_
+
+3. Choose _'Network'_ from the displayed options. Choose below sub-options:
+ - Security Rules
+ - Add/Modify/Delete Security Rules (Reads SecRulesinOCI sheet)
+ - Route Rules
+ - Add/Modify/Delete Route Rules (Reads RouteRulesinOCI sheet)
+ - DRG Route Rules
+ - Add/Modify/Delete DRG Route Rules (Reads DRGRouteRulesinOCI sheet)
+
+ Once the execution is successful, _\_seclists.auto.tfvars_, _\_routetables.auto.tfvars_ and _\_drg-routetables.auto.tfvars_ file will be generated under the folder _/cd3user/tenancies//terraform_files/_. Existing files will move into respective backup folders.
+
+ **NOTE**: This will create TF for only those Security Lists and Route Tables in VCNs which are part of cd3 and skip any VCNs that have been created outside of cd3 execution.
+
+4. It will show different stages of execution of _setUpOCI_ pipeline and also launch the _terraform-apply_ pipeline for 'network'.
+5. Click on Proceed for 'Get Approval' stage of the terraform pipeline.
+
+ This completes the modification of Security Rules, Route Rules and DRG Route Rules in OCI. Verify the components in console.
+
+ [Go back to Networking Scenarios](#executing-networking-scenarios-using-toolkit-via-jenkins)
+### Sync manual changes done in OCI of Security Rules, Route Rules and DRG Route Rules with CD3 Excel Sheet and Terraform
+Follow the below process to export the rules to the same CD3 Excel Sheet as the one used to Create Network, and to sync the Terraform files with OCI whenever a user adds, modifies or deletes rules in the OCI Console manually.
+
+**NOTE**: Make sure to close your Excel sheet during the export process.
+
+1. Execute the _setupOCI.py_ pipeline with _Workflow_ as _Create Resources in OCI(Greenfield Workflow)_
+
+2. Choose _'Network'_ from the displayed menu. Choose below sub-options:
+ - Security Rules
+ - Export Security Rules (From OCI into SecRulesinOCI sheet)
+ - Add/Modify/Delete Security Rules (Reads SecRulesinOCI sheet)
+ - Route Rules
+ - Export Route Rules (From OCI into RouteRulesinOCI sheet)
+ - Add/Modify/Delete Route Rules (Reads RouteRulesinOCI sheet)
+ - DRG Route Rules
+ - Export DRG Route Rules (From OCI into DRGRouteRulesinOCI sheet)
+ - Add/Modify/Delete DRG Route Rules (Reads DRGRouteRulesinOCI sheet)
+
+ Once the execution is successful, the 'RouteRulesInOCI', 'SecRulesInOCI' and 'DRGRouteRulesInOCI' tabs of the excel sheet will be updated with the rules exported from OCI, and the _\_seclists.auto.tfvars_, _\_routetables.auto.tfvars_ and _\_drg-routetables.auto.tfvars_ files will be generated under the folder _/cd3user/tenancies//terraform_files//_
+
+ 3. It will show different stages of execution of _setUpOCI_ pipeline and also launch the _terraform-apply_ pipeline for 'network'.
+ 4. Click on Proceed for 'Get Approval' stage of the terraform pipeline.
+
+ This completes the export of Security Rules, Route Rules and DRG Route Rules from OCI. Terraform plan/apply should be in sync with OCI.
+
+ [Go back to Networking Scenarios](#executing-networking-scenarios-using-toolkit-via-jenkins)
+### Add/Modify/Delete NSGs
+Follow the below steps to update NSGs.
+
+1. Modify your excel sheet to update required data in the Tabs - NSGs.
+
+2. Execute the _setupOCI.py_ pipeline with _Workflow_ as _Create Resources in OCI(Greenfield Workflow)_
+
+3. Choose _'Network'_ from the displayed menu. Choose below sub-option:
+ - Network Security Groups
+ - Add/Modify/Delete NSGs (Reads NSGs sheet)
+
+ Once the execution is successful, _\_nsgs.auto.tfvars_ will be generated under the folder _/cd3user/tenancies//terraform_files//_. Existing files will move into respective backup folders.
+
+4. It will show different stages of execution of _setUpOCI_ pipeline and also launch the _terraform-apply_ pipeline for 'nsg'.
+5. Click on Proceed for 'Get Approval' stage of the terraform pipeline.
+
+This completes the modification of NSGs in OCI. Verify the components in console.
+
+ [Go back to Networking Scenarios](#executing-networking-scenarios-using-toolkit-via-jenkins)
+
+### Add/Modify/Delete VLANs
+Follow the below steps to update VLANs.
+
+1. Modify your excel sheet to update required data in the Tabs - SubnetsVLANs.
+2. Make sure that the RouteRulesinOCI sheet and the corresponding terraform are in sync with the route rules in the OCI console. If not, please follow the procedure specified in [Sync manual changes done in OCI of Security Rules, Route Rules and DRG Route Rules with CD3 Excel Sheet and Terraform](#sync-manual-changes-done-in-oci-of-security-rules-route-rules-and-drg-route-rules-with-cd3-excel-sheet-and-terraform)
+
+3. Execute the _setupOCI.py_ pipeline with _Workflow_ as _Create Resources in OCI(Greenfield Workflow)_
+4. Choose _'Network'_ from the displayed menu. Choose below sub-option:
+ - Add/Modify/Delete VLANs (Reads SubnetsVLANs sheet)
+
+ Once the execution is successful, _\_vlans.auto.tfvars_ will be generated under the folder _/cd3user/tenancies//terraform_files//_. Existing files will move into respective backup folders. The _\_routetables.auto.tfvars_ file will also be updated with the route table information specified for each VLAN.
+
+5. It will show different stages of execution of _setUpOCI_ pipeline and also launch the _terraform-apply_ pipeline for 'vlan' and 'network'.
+6. Click on Proceed for 'Get Approval' stage of the terraform pipeline.
+
+7. Again, make sure to export the Route Rules in OCI into the excel and terraform. Please follow the procedure specified in [Sync manual changes done in OCI of Security Rules, Route Rules and DRG Route Rules with CD3 Excel Sheet and Terraform](#sync-manual-changes-done-in-oci-of-security-rules-route-rules-and-drg-route-rules-with-cd3-excel-sheet-and-terraform)
+
+This completes the modification of VLANs in OCI. Verify the components in console.
+
+### RPCs
+Remote VCN peering is the process of connecting two VCNs in different regions (but the same tenancy). The peering allows the VCNs' resources to communicate using private IP addresses without routing the traffic over the internet or through your on-premises network.
+
+ - Modify your excel sheet to update required data in the Tabs - DRGs.
+ - The source and target RPC details are to be entered in the DRGs sheet for establishing a connection. Please check the example in the excel file for reference.
+ - Make sure that the DRGRouteRulesinOCI sheet and the corresponding terraform are in sync with the DRG route rules in the OCI console. If not, please follow the procedure specified in [Sync manual changes done in OCI of Security Rules, Route Rules and DRG Route Rules with CD3 Excel Sheet and Terraform](#sync-manual-changes-done-in-oci-of-security-rules-route-rules-and-drg-route-rules-with-cd3-excel-sheet-and-terraform)
+ - The Global directory inside the customer outdir will have all RPC related files and scripts.
+ - The RPC resources (modules, provider configurations etc.) are generated dynamically for the tenancy and work only with the CD3 Automation Toolkit.
+ - Choose option 'Network' and then 'Customer Connectivity' for creating RPC in GreenField workflow.
+ - Output files are created under _/cd3user/tenancies//terraform_files/global/rpc_ directory
+
+ [Go back to Networking Scenarios](#executing-networking-scenarios-using-toolkit-via-jenkins)
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF.md b/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF.md
index c91c48478..9abe016bb 100644
--- a/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF.md
+++ b/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF.md
@@ -1,6 +1,6 @@
# Networking Scenarios
-## Greenfield Tenancies (Managing Network for Green Field Tenancies)
+## Managing Network for Greenfield Workflow
- [Create Network](#create-network)
- [Use an existing DRG in OCI while creating the network](#use-an-existing-drg-in-oci-while-creating-the-network)
- [Modify Network](#modify-network)
@@ -23,15 +23,15 @@ Creation of Networking components using Automation Toolkit involves four simple
Below are the steps in detail to create Network that includes VCNs, Subnets, DHCP, DRG, Security List, Route Tables, DRG Route Tables, NSGs, etc.
-1. Choose appropriate excel sheet from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md#excel-sheet-templates) and fill the required Network details in the Networking Tabs - VCNs, DRGs, VCN Info, DHCP, Subnets, NSGs tabs.
+1. Choose appropriate excel sheet from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md) and fill the required Network details in the Networking Tabs - VCNs, DRGs, VCN Info, DHCP, Subnets, NSGs tabs.
-2. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _false_:
+2. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _create_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
3. Choose option _'Validate CD3'_ and then _'Validate Network(VCNs, Subnets, DHCP, DRGs)'_ to check for syntax errors in Excel sheet. Examine the log file generated at _/cd3user/tenancies//\_cd3validator.log_. If there are errors, please rectify them accordingly and proceed to the next step.
-4. Choose option to _'Create Network'_ under _'Network'_ from the displayed menu. Once the execution is successful, multiple .tfvars related to networking like _\_major-objects.auto.tfvars_ and more will be generated under the folder _/cd3user/tenancies//terraform_files/_
+4. Choose option to _'Create Network'_ under _'Network'_ from the displayed menu. Once the execution is successful, multiple .tfvars related to networking like _\_major-objects.auto.tfvars_ and more will be generated under the folder _/cd3user/tenancies//terraform_files//_
5. Navigate to the above path and execute the terraform commands:
_terraform init_
@@ -40,7 +40,7 @@ Below are the steps in detail to create Network that includes VCNs, Subnets, DHC
This completes the creation of Networking components in OCI. Verify the components in console. However, the details of the default security lists and default route tables may not be available in the CD3 Excel sheet yet. In order to export that data, please follow the below steps:
-6. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _false_:
+6. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _create_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
@@ -66,7 +66,7 @@ In some scenarios, a DRG has already been created in the tenancy and rest of the
→ Terraform Plan will indicate to add all the other components except DRG.
_terraform apply_
- Continue executing the remaining steps (from Step 6) of [Create Network](#1-create-network).
+ Continue executing the remaining steps (from Step 6) of [Create Network](#create-network).
[Go back to Networking Scenarios](#networking-scenarios)
### Modify Network
@@ -75,18 +75,18 @@ Modifying the Networking components using Automation Toolkit involves three simp
- Running the toolkit to generate auto.tfvars.
- Executing Terraform commands to provision/modify the resources in OCI.
- ***Note***: Follow [these steps](#3-modify-security-rules-route-rules-and-drg-route-rules) to modify Security Rules, Route Rules and DRG Route Rules
+ ***Note***: Follow [these steps](#modify-security-rules-route-rules-and-drg-route-rules) to modify Security Rules, Route Rules and DRG Route Rules
_Steps in detail_:
1. Modify your excel sheet to update required data in the Tabs - VCNs, DRGs, VCN Info, DHCP and Subnets.
-2. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _false_:
+2. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _create_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
3. To Validate the CD3 excel Tabs - Choose option _'Validate CD3'_ and _'Validate Network(VCNs, Subnets, DHCP, DRGs)'_ from sub-menu to check for syntax errors in Excel sheet. Examine the log file generated at _/cd3user/tenancies//\_cd3validator.logs_. If there are errors, please rectify them accordingly and proceed to the next step.
-4. Choose option to _'Modify Network'_ under _'Network'_ from the displayed menu. Once the execution is successful, multiple .tfvars related to networking like _\_major-objects.auto.tfvars_ and more will be generated under the folder _/cd3user/tenancies//terraform_files/_. Existing files will move into respective backup folders.
+4. Choose option to _'Modify Network'_ under _'Network'_ from the displayed menu. Once the execution is successful, multiple .tfvars related to networking like _\_major-objects.auto.tfvars_ and more will be generated under the folder _/cd3user/tenancies//terraform_files//_. Existing files will move into respective backup folders.
 **Note**: Make sure to export the Sec Rules, Route Rules and DRG Route Rules to the CD3 Excel Sheet before executing this option.
@@ -107,7 +107,7 @@ Follow the below steps to add, update or delete the following components:
1. Modify your excel sheet to update required data in the Tabs - RouteRulesInOCI, SecRulesInOCI, DRGRouteRulesInOCI tabs.
-2. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _false_:
+2. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _create_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
@@ -136,7 +136,7 @@ Follow the below process to export the rules to the same CD3 Excel Sheet as the
**NOTE**: Make sure to close your Excel sheet during the export process.
-1. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _false_:
+1. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _create_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
@@ -158,7 +158,7 @@ Follow the below process to export the rules to the same CD3 Excel Sheet as the
- DRG Route Rules
- Add/Modify/Delete DRG Route Rules (Reads DRGRouteRulesinOCI sheet)
- Once the execution is successful, _\_seclists.auto.tfvars_, _\routetables.auto.tfvars_ and _\drg-routetables.auto.tfvars_ file will be generated under the folder _/cd3user/tenancies//terraform_files/_
+ Once the execution is successful, _\_seclists.auto.tfvars_, _\routetables.auto.tfvars_ and _\drg-routetables.auto.tfvars_ file will be generated under the folder _/cd3user/tenancies//terraform_files//_
Navigate to the above path and execute the terraform commands:
_terraform init_
@@ -173,7 +173,7 @@ Follow the below steps to update NSGs.
1. Modify your excel sheet to update required data in the Tabs - NSGs.
-2. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _false_:
+2. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _create_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
@@ -181,7 +181,7 @@ Follow the below steps to update NSGs.
- Network Security Groups
- Add/Modify/Delete NSGs (Reads NSGs sheet)
- Once the execution is successful, _\_nsgs.auto.tfvars_ will be generated under the folder _/cd3user/tenancies//terraform_files/_. Existing files will move into respective backup folders.
+ Once the execution is successful, _\_nsgs.auto.tfvars_ will be generated under the folder _/cd3user/tenancies//terraform_files//_. Existing files will move into respective backup folders.
4. Navigate to the above path and execute the terraform commands:
_terraform init_
@@ -198,14 +198,14 @@ Follow the below steps to update VLANs.
1. Modify your excel sheet to update required data in the Tabs - SubnetsVLANs.
2. Make sure that the RouteRulesinOCI sheet and the corresponding terraform are in sync with the route rules in the OCI console. If not, please follow the procedure specified in [Sync manual changes done in OCI of Security Rules, Route Rules and DRG Route Rules with CD3 Excel Sheet and Terraform](#sync-manual-changes-done-in-oci-of-security-rules-route-rules-and-drg-route-rules-with-cd3-excel-sheet-and-terraform)
-3. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _false_:
+3. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _create_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
4. Choose _'Network'_ from the displayed menu. Choose below sub-option:
- Add/Modify/Delete VLANs (Reads SubnetsVLANs sheet)
- Once the execution is successful, _\_vlans.auto.tfvars_ will be generated under the folder _/cd3user/tenancies//terraform_files/_. Existing files will move into respective backup folders. _\routetables.auto.tfvars_ file will also be updated with the route table information specified for each VLAN.
+ Once the execution is successful, _\_vlans.auto.tfvars_ will be generated under the folder _/cd3user/tenancies//terraform_files//_. Existing files will move into respective backup folders. _\routetables.auto.tfvars_ file will also be updated with the route table information specified for each VLAN.
5. Navigate to the above path and execute the terraform commands:
_terraform init_
@@ -231,7 +231,7 @@ Remote VCN peering is the process of connecting two VCNs in different regions (b
diff --git a/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosNGF.md b/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosNGF.md
index 6a57bad91..f0387f7a2 100644
--- a/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosNGF.md
+++ b/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosNGF.md
@@ -1,6 +1,6 @@
# Networking Scenarios
-## Non-Greenfield Tenancies (Managing Network for Non Green Field Tenancies)
+## Managing Network for Non-Greenfield Workflow
- [Export Network](#non-greenfield-tenancies)
- [Add a new or modify the existing networking components](#add-a-new-or-modify-the-existing-networking-components)
@@ -14,7 +14,7 @@ Follow the below steps to export the Networking components that includes VCNs, S
1. Use the [CD3-Blank-Template.xlsx](/cd3_automation_toolkit/example) to export the networking resources into the Tabs - VCNs, DRGs, VCN Info, DHCP, Subnets, NSGs, RouteRulesInOCI, SecRulesInOCI,DRGRouteRulesInOCI tabs.
-2. Execute the _setupOCI.py_ file with _non_gf_tenancy_ parameter value to _true_:
+2. Execute the _setupOCI.py_ file with _workflow_type_ parameter value to _export_resources_:
```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
@@ -27,7 +27,7 @@ Follow the below steps to export the Networking components that includes VCNs, S
- Export Network components for SubnetsVLANs Tab
- Export Network components for NSGs Tab
- Once the execution is successful, networking related .tfvars files and .sh files containing import statements will be generated under the folder _/cd3user/tenancies//terraform_files/_
+ Once the execution is successful, networking related .tfvars files and .sh files containing import statements will be generated under the folder _/cd3user/tenancies//terraform_files//_
Also, the RPC related .tfvars and .sh files containing import statements will be generated in the global directory, which is inside the /cd3user/tenancies//terraform_files/ folder.
@@ -55,8 +55,8 @@ Subnets tab:
[Go back to Networking Scenarios](#networking-scenarios)
### Add a new or modify the existing networking components
-1. Export the Networking components by following the steps [above](#1-export-network). (Note that here _non\_gf\_tenancy_ flag is set to true)
-2. Follow the [process](/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF.md#modify-network) to add new components such as VCN/DHCP/DRG/IGW/NGW/SGW/LPG/Subnet etc. (Note that here _non\_gf\_tenancy_ flag is set to false)
+1. Export the Networking components by following the steps [above](#export-network). (Note that here the _workflow_type_ parameter is set to export_resources)
+2. Follow the [process](/cd3_automation_toolkit/documentation/user_guide/NetworkingScenariosGF.md#modify-network) to add new components such as VCN/DHCP/DRG/IGW/NGW/SGW/LPG/Subnet etc. (Note that here the _workflow_type_ parameter is set to create_resources)
[Go back to Networking Scenarios](#networking-scenarios)
@@ -64,7 +64,7 @@ Subnets tab:
diff --git a/cd3_automation_toolkit/documentation/user_guide/NonGreenField-Jenkins.md b/cd3_automation_toolkit/documentation/user_guide/NonGreenField-Jenkins.md
new file mode 100644
index 000000000..818a17a96
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/NonGreenField-Jenkins.md
@@ -0,0 +1,75 @@
+# Export Resources from OCI via Jenkins(Non-Greenfield Workflow)
+
+
+**Step 1**:
+ Choose the appropriate CD3 Excel sheet template from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md)
+Choose **CD3-Blank-template.xlsx** for an empty sheet.
+
+**Step 2**:
+ Log in to the Jenkins URL with the user created after initialization and click on the setUpOCI pipeline from the Dashboard. Click on **Build with Parameters** from the left side menu.
+
+
+
+>Note - Only one user at a time using the Jenkins setup is supported in the current release of the toolkit.
+
+**Step 3**:
+ Upload the above chosen Excel sheet in **Excel_Template** section.
+
+
+>This will copy the Excel file to `/cd3user/tenancies/` inside the container. It will also take a backup of the existing Excel file on the container by appending the current datetime if the same filename is uploaded in multiple executions.
+
+
+**Step 4:**
+ Select the workflow as **Export Resources from OCI** (Non-Greenfield Workflow). Choose single or multiple MainOptions as required and then the corresponding SubOptions.
+ The below screenshot shows the export of Network and Compute.
+
+
+
+
+**Step 5:**
+ Specify region and compartment from where you want to export the data.
+ It also asks for service-specific filters, like display name patterns for compute. Leave these empty if no filter is needed.
+
+
+ Click on **Build** at the bottom.
+
+
+**Step 6:**
+ setUpOCI pipeline is triggered and stages are executed as shown below:
+
+
+
+
+**Expected Output of 'Execute setUpOCI' stage:**
+
+
+Overwrites the specific tabs of the Excel sheet with the exported resource details from OCI.
+Executes shell scripts with import commands (tf_import_commands_<resource>_nonGF.sh) generated in the previous stage.
+
+
+
+**Expected Output of Terraform Pipelines:**
+
+
+Respective pipelines will get triggered automatically from the setUpOCI pipeline based on the services chosen for export. You could also trigger them manually when required.
+
+If the 'Run Import Commands' stage was successful (i.e. tf_import_commands_<resource>_nonGF.sh ran successfully for all services chosen for export), the respective terraform pipelines triggered should show the 'Terraform Plan' stage as 'No Changes'.
+
+
+
+
+
+> **Note:**
+> Once you have exported the required resources and imported into tfstate, you can use the toolkit to modify them or create new on top of them using 'Create Resources in OCI' workflow.
+
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/NonGreenField.md b/cd3_automation_toolkit/documentation/user_guide/NonGreenField.md
index af9bf8c08..150207001 100644
--- a/cd3_automation_toolkit/documentation/user_guide/NonGreenField.md
+++ b/cd3_automation_toolkit/documentation/user_guide/NonGreenField.md
@@ -1,4 +1,4 @@
-# Non-Green Field Tenancies
+# Export Resources from OCI (Non-Greenfield Workflow)
> **Note**
@@ -7,17 +7,38 @@
> * Tool Kit then generates the TF configuration files/auto.tfvars files for these exported resources.
> * It also generates a shell script - tf_import_commands_``_nonGF.sh that has the import commands, to import the state of the resources to tfstate file.(This helps to manage the resources via Terraform in future).
-## Detailed Steps
-Below are the steps that will help to configure the Automation Tool Kit to support the Non - Green Field Tenancies:
**Step 1:**
- Chose the appropriate CD3 Excel sheet template from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md#excel-sheet-templates)
+ Choose the appropriate CD3 Excel sheet template from [Excel Templates](/cd3_automation_toolkit/documentation/user_guide/ExcelTemplates.md)
**Step 2:**
Put CD3 Excel at the appropriate location.
- Modify/Review [setUpOCI.properties](/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md#setupociproperties) with **non_gf_tenancy** set to **true** as shown below:
-![image](https://user-images.githubusercontent.com/103508105/221798771-9bca7a1a-5ef3-4587-8138-97f65c4d7cf1.png)
+ Modify/Review _/cd3user/tenancies//\_setUpOCI.properties_ with **workflow_type** set to **export_resources** as shown below:
+```ini
+#Input variables required to run setUpOCI script
+#path to output directory where terraform file will be generated. eg /cd3user/tenancies//terraform_files
+outdir=/cd3user/tenancies/demotenancy/terraform_files/
+
+#prefix for output terraform files eg like demotenancy
+prefix=demotenancy
+
+# auth mechanism for OCI APIs - api_key,instance_principal,session_token
+auth_mechanism=api_key
+
+#input config file for Python API communication with OCI eg /cd3user/tenancies//.config_files/_config;
+config_file=/cd3user/tenancies/demotenancy/.config_files/demotenancy_oci_config
+
+# Leave it blank if you want single outdir or specify outdir_structure_file.properties containing directory structure for OCI services.
+outdir_structure_file=/cd3user/tenancies/demotenancy/demotenancy_outdir_structure_file.properties
+
+#path to cd3 excel eg /cd3user/tenancies//CD3-Customer.xlsx
+cd3file=/cd3user/tenancies/demotenancy/CD3-Blank-template.xlsx
+
+#specify create_resources to create new resources in OCI(greenfield workflow)
+#specify export_resources to export resources from OCI(non-greenfield workflow)
+workflow_type=export_resources
+```
**Step 3:**
Execute the SetUpOCI.py script to start exporting the resources to CD3 and creating the terraform configuration files.
@@ -52,7 +73,7 @@ c. Shell Script with import commands - tf_import_commands_``_nonGF.sh
> **Note**
-> Once the export (including the execution of **tf_import_commands_``_nonGF.sh**) is complete, switch the value of **non_gf_tenancy** back to **false**.
+> Once the export (including the execution of **tf_import_commands_``_nonGF.sh**) is complete, switch the value of **workflow_type** back to **create_resources**.
> This allows the Tool Kit to support the tenancy as Green Field from this point onwards.
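The switch back can also be scripted. Below is a minimal, illustrative sketch (not part of the toolkit) that flips `workflow_type` using Python's `configparser`; the filename is a placeholder for the real `_setUpOCI.properties` path.

```python
import configparser

# Hypothetical path; the real file lives at
# /cd3user/tenancies/<prefix>/<prefix>_setUpOCI.properties
PROPS_FILE = "setUpOCI.properties"

# Write a minimal sample file so the sketch is self-contained
with open(PROPS_FILE, "w") as f:
    f.write("[Default]\nworkflow_type=export_resources\n")

config = configparser.ConfigParser()
config.read(PROPS_FILE)

# Flip the workflow back to Greenfield once the export and imports are done
config["Default"]["workflow_type"] = "create_resources"
with open(PROPS_FILE, "w") as f:
    config.write(f)

# Re-read to confirm the change took effect
check = configparser.ConfigParser()
check.read(PROPS_FILE)
print(check["Default"]["workflow_type"])  # create_resources
```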
## Example - Export Identity
@@ -62,7 +83,7 @@ Follow the below steps to quickly export Identity components from OCI.
2. Edit the _setUpOCI.properties_ at location:_/cd3user/tenancies //\_setUpOCI.properties_ with appropriate values.
- Update the _cd3file_ parameter to specify the CD3 excel sheet path.
- - Set the _non_gf_tenancy_ parameter value to _true_. (for Non Greenfield Workflow.)
+ - Set the _workflow_type_ parameter value to _export_resources_. (for Non Greenfield Workflow.)
3. Change Directory to 'cd3_automation_toolkit' :
```cd /cd3user/oci_tools/cd3_automation_toolkit/```
diff --git a/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md b/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md
deleted file mode 100644
index 6780a6db8..000000000
--- a/cd3_automation_toolkit/documentation/user_guide/RunningAutomationToolkit.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# **Getting Started with Automation Toolkit**
-There are 2 main inputs to the Automation Toolkit.
-- CD3 Excel Sheet
-- setUpOCI.properties
-
-### **Excel Sheet Templates**
-
-Below are the CD3 templates for the latest release having standardised IAM Components (compartments, groups and policies), Network Components and Events & Notifications Rules as per CIS Foundations Benchmark for Oracle Cloud.
-
-Details on how to fill data into the excel sheet can be found in the Blue section of each sheet inside the excel file. Make appropriate changes to the templates eg region and use for deployment.
-
-|Excel Sheet| Purpose |
-|-----------|----------------------------------------------------------------------------------------------------------------------------|
-| [CD3-Blank-template.xlsx](/cd3_automation_toolkit/example) | Choose this template while exporting the existing resources from OCI into the CD3 and Terraform.|
-| [CD3-CIS-template.xlsx](/cd3_automation_toolkit/example) | This template has auto-filled in data of CIS Landing Zone for DRGv2. Choose this template to create Core OCI Objects (IAM, Tags, Networking, Instances, LBR, Storage, Databases) |
-|[CD3-HubSpoke-template](/cd3_automation_toolkit/example) | This template has auto-filled in data for a Hub and Spoke model of networking. Choose this template to create Core OCI Objects (IAM, Tags, Networking, Instances, LBR, Storage, Databases)|
-|[CD3-SingleVCN-template](/cd3_automation_toolkit/example)| This template has auto-filled in data for a Single VCN model of networking. Choose this template to create Core OCI Objects (IAM, Tags, Networking, Instances, LBR, Storage, Databases)|
-|[CD3-CIS-ManagementServices-template.xlsx](/cd3_automation_toolkit/example) | This template has auto-filled in data of CIS Landing Zone. Choose this template while creating the components of Events, Alarms, Notifications and Service Connectors|
-
-
-> The Excel Templates can also be found at _/cd3user/oci_tools/cd3_automation_toolkit/example_ inside the container.
-> After deploying the infra using any of the templates, please run [CIS compliance checker script](/cd3_automation_toolkit/documentation/user_guide/learn_more/CISFeatures.md#1-run-cis-compliance-checker-script))
-
-
-### **setUpOCI.properties**
-
-**Current Version: setUpOCI.properties v10.1**
-
-Make sure to use/modify the properties file at _/cd3user/tenancies //\_setUpOCI.properties_ during executions.
-
-```
-[Default]
-
-#Input variables required to run setUpOCI script
-
-#path to output directory where terraform file will be generated. eg /cd3user/tenancies//terraform_files
-outdir=
-
-#prefix for output terraform files eg like demotenancy
-prefix=
-
-#input config file for Python API communication with OCI eg /cd3user/tenancies//_config;
-config_file=
-
-#path to cd3 excel eg /cd3user/tenancies//CD3-Customer.xlsx
-cd3file=
-
-#Is it Non GreenField tenancy
-non_gf_tenancy=false
-
-# Leave it blank if you want single outdir or specify outdir_structure_file.properties containing directory structure for OCI services.
-outdir_structure_file=
-```
-
-| Variable | Description | Example |
-|---|---|---|
-|outdir|Path to output directory where terraform files will be generated| /cd3user/tenancies//terraform\_files|
-|prefix|Prefix for output terraform files|\|
-|config\_file|Python config file|/cd3user/tenancies//config|
-| cd3file |Path to the CD3 input file |/cd3user/tenancies//testCD3. xlsx |
-|non\_gf\_tenancy |Specify if its a Non Green field tenancy or not (**True** or **False**)| False|
-|outdir\_structure\_file |Parameter specifying single outdir or different for different services|Blank or _gc2_outdir_structure_file|
-
-
-
-### **Execution Steps Overview:**
-Choose the appropriate CD3 Excel Sheet and update the setUpOCI.properties file at _/cd3user/tenancies//\_setUpOCI.properties_ and run the commands below:
-
-**Step 1**:
- Change Directory to 'cd3_automation_toolkit'
- ```cd /cd3user/oci_tools/cd3_automation_toolkit/```
-
-**Step 2**:
- Place Excel sheet at appropriate location in your container and provide the corresponding path in _cd3file_ parmeter of: _/cd3user/tenancies //\_setUpOCI.properties_ file
-
-**Step 3**
-
-Execute the setUpOCI Script: ```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
- → Example execution of the script:
-
-```
-[cd3user@25260a87b137 cd3_automation_toolkit]$ python setUpOCI.py /cd3user/tenancies/demotenancy/demotenancy_setUpOCI.properties
-Updated OCI_Regions file !!!
-Script to fetch the compartment OCIDs into variables file has not been executed.
-Do you want to run it now? (y|n):
-```
-→ This prompt appears for the very first time when you run the toolkit or when any new compartments are created using the toolkit. Enter 'y' to fetch the details of compartment OCIDs into variables file.
- → After fetching the compartment details, the toolkit will display the menu options.
-
-
-
diff --git a/cd3_automation_toolkit/documentation/user_guide/Upgrade_Toolkit.md b/cd3_automation_toolkit/documentation/user_guide/Upgrade_Toolkit.md
index b163bc824..cde6eb19a 100644
--- a/cd3_automation_toolkit/documentation/user_guide/Upgrade_Toolkit.md
+++ b/cd3_automation_toolkit/documentation/user_guide/Upgrade_Toolkit.md
@@ -1,5 +1,11 @@
# Steps to Upgrade Your Toolkit (For Existing Customers using older versions):
+## Upgrade to Release v2024.1.0
+This is a major release that introduces CI/CD using Jenkins.
+1. Follow the steps in [Launch Docker Container](/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md) to build the new image with latest code and launch the container by specifying new path for to create a fresh outdir.
+2. Use the Non-Greenfield workflow to export the required OCI services into the new Excel sheet and tfvars. Also run the terraform import commands.
+3. Once terraform is in sync, switch to the Greenfield workflow and use it for any future modifications to the infra.
+
## Upgrade to Release v12.1 from v12
1. Follow the steps in [Launch Docker Container](/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md) to build new image with latest code and launch the container by specifying same path for to keep using same outdir.
2. Copy sddc.tf from _/cd3user/oci_tools/cd3\_automation\_toolkit/user-scripts/terraform_files/_ to _/cd3user/tenancies//terraform\_files//_.
@@ -7,13 +13,13 @@
4. Copy the sddcs variable block from _/cd3user/oci_tools/cd3\_automation\_toolkit/user-scripts/terraform_files/variables_example.tf_ and replace it in your variables_\.tf file
## Upgrade to Release v12
-1. Follow the steps in Launch Docker Container to build new image with latest code and launch the container by specifying new path for to create a fresh outdir.
-2. Use Non Greenfield workflow to export the required OCI services into new excel sheet and the tfvars. Run terraform import commands also.
-3. Once terraform is in synch, Switch to Greenfield workflow and use for any future modifications to the infra.
+
## Upgrade to Release v11.1 from v11
-1. Follow the steps in [Launch Docker Container](/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md) to build new image with latest code and launch the container by specifying same path for to keep using same outdir.
+1. Follow the steps in [Launch Docker Container](/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md) to build the new image with latest code and launch the container by specifying new path for to create a fresh outdir.
+2. Use the Non-Greenfield workflow to export the required OCI services into the new Excel sheet and tfvars. Also run the terraform import commands.
+3. Once terraform is in sync, switch to the Greenfield workflow and use it for any future modifications to the infra.
## Upgrade to Release v11
1. Follow the steps in [Launch Docker Container](/cd3_automation_toolkit/documentation/user_guide/Launch_Docker_container.md) to build new image with latest code and launch the container by specifying new path for to create a fresh outdir.
diff --git a/cd3_automation_toolkit/documentation/user_guide/Workflows-jenkins.md b/cd3_automation_toolkit/documentation/user_guide/Workflows-jenkins.md
new file mode 100644
index 000000000..68b0ae801
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/Workflows-jenkins.md
@@ -0,0 +1,34 @@
+# Using the Automation Toolkit via Jenkins
+
+Jenkins integration with the toolkit is provided to jump-start your journey with CI/CD for IaC in OCI. A beginner-level understanding of Jenkins is required.
+
+## **Pre-reqs for Jenkins Configuration**
+* The configurations are done when you execute createTenancyConfig.py in [Connect container to OCI Tenancy](/cd3_automation_toolkit/documentation/user_guide/Connect_container_to_OCI_Tenancy.md). Please validate them:
+ - jenkins.properties file is created under _/cd3user/tenancies/jenkins\_home_ as per input parameters in tenancyConfig.properties
+ - An Object Storage bucket is created in OCI in the specified compartment to manage tfstate remotely.
+ - Customer Secret Key is configured for the user for S3 credentials of the bucket.
+ - A DevOps Project, Repo and Topic are created in OCI in the specified compartment to store terraform_files. GIT is configured on the container with config file at ```/cd3user/.ssh/config```
+
+
+## **Bootstrapping of Jenkins in the toolkit**
+
+* Execute the below command to start Jenkins -
+```/usr/share/jenkins/jenkins.sh &```
+
+* Access Jenkins URL using -
+ - https://\:\/
+ > Notes:
+ > - \ is the port mapped to the local system during docker container creation, e.g. 8443.
+ > - Network Connectivity should be allowed on this host and port.
+ - It will prompt you to create the first user to access the Jenkins URL. This will be the admin user.
+ - The Automation Toolkit only supports a single user Jenkins setup in this release.
+ - Once you log in, the Jenkins Dashboard will be displayed.
+
+
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/Workflows.md b/cd3_automation_toolkit/documentation/user_guide/Workflows.md
index 185323cba..138a22ff6 100644
--- a/cd3_automation_toolkit/documentation/user_guide/Workflows.md
+++ b/cd3_automation_toolkit/documentation/user_guide/Workflows.md
@@ -1,15 +1,86 @@
-# Using the Automation Toolkit
+# Using the Automation Toolkit via CLI
+
+### **Prepare setUpOCI.properties**
+**Current Version: setUpOCI.properties v2024.1.0**
+
+Make sure to use/modify the properties file at _/cd3user/tenancies //\_setUpOCI.properties_ during executions.
+
+```ini
+[Default]
+
+#Input variables required to run setUpOCI script
+
+#path to output directory where terraform file will be generated. eg /cd3user/tenancies//terraform_files
+outdir=
+
+#prefix for output terraform files eg like demotenancy
+prefix=
+
+# auth mechanism for OCI APIs - api_key,instance_principal,session_token
+auth_mechanism=
+
+#input config file for Python API communication with OCI eg /cd3user/tenancies//.config_files/_config;
+config_file=
+
+# Leave it blank if you want single outdir or specify outdir_structure_file.properties containing directory structure for OCI services.
+outdir_structure_file=
+
+#path to cd3 excel eg /cd3user/tenancies/\CD3-Customer.xlsx
+cd3file=
+
+#specify create_resources to create new resources in OCI(greenfield workflow)
+#specify export_resources to export resources from OCI(non-greenfield workflow)
+workflow_type=create_resources
+```
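Since the properties file is standard ini format and the toolkit is Python based, it can be sanity-checked with the stdlib `configparser` before running setUpOCI.py. The snippet below is an illustrative sketch, not part of the toolkit; the sample values are hypothetical.

```python
import configparser

# Hypothetical setUpOCI.properties content; real values come from
# /cd3user/tenancies/<prefix>/<prefix>_setUpOCI.properties
SAMPLE = """
[Default]
outdir=/cd3user/tenancies/demotenancy/terraform_files/
prefix=demotenancy
auth_mechanism=api_key
config_file=/cd3user/tenancies/demotenancy/.config_files/demotenancy_oci_config
outdir_structure_file=
cd3file=/cd3user/tenancies/demotenancy/CD3-Customer.xlsx
workflow_type=create_resources
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
props = config["Default"]

# Validate the two enumerated fields before launching setUpOCI.py
assert props["workflow_type"] in ("create_resources", "export_resources")
assert props["auth_mechanism"] in ("api_key", "instance_principal", "session_token")
print(props["workflow_type"])
```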
+
+| Variable | Description | Example |
+|---|---|---|
+|outdir|Path to output directory where terraform files will be generated| /cd3user/tenancies//terraform\_files|
+|prefix|Prefix for output terraform files|\|
+|auth_mechanism|Authentication Mechanism for OCI APIs|api_key|
+|config\_file|Python config file|/cd3user/tenancies//.config_files/_config|
+|outdir\_structure\_file |Parameter specifying single outdir or different for different services|Blank or _outdir_structure_file.properties|
+| cd3file |Path to the Excel input file |/cd3user/tenancies//testCD3.xlsx |
+|workflow\_type |create\_resources (Greenfield workflow) or export\_resources (Non-Greenfield workflow); see Automation Toolkit Workflows below for more information|create\_resources|
+
+
+### **Automation Toolkit Workflows:**
CD3 Automation Tool Kit supports 2 main workflows:
-1. Green Field Tenancies - Empty OCI tenancy (or) do not need to modify / use any existing resources.
-2. Non Green Field Tenancies - Need to use / manage existing resources. Export existing resources into CD3 & TF State, then use the Greenfield workflow.
+1. Create Resources in OCI (Greenfield Workflow) - for an empty OCI tenancy, or when you do not need to modify or use any existing resources.
+2. Export Resources from OCI (Non-Greenfield Workflow) - when you need to use or manage existing resources. Export the existing resources into CD3 and the TF state, then use the Greenfield workflow.
+
+
+
+### **Execution Steps Overview:**
+Choose the appropriate CD3 Excel Sheet and update the setUpOCI.properties file at _/cd3user/tenancies//\_setUpOCI.properties_ and run the commands below:
+
+**Step 1**:
+ Change Directory to 'cd3_automation_toolkit'
+ ```cd /cd3user/oci_tools/cd3_automation_toolkit/```
+
+**Step 2**:
+ Place the Excel sheet at the appropriate location in your container and provide the corresponding path in the _cd3file_ parameter of the _/cd3user/tenancies //\_setUpOCI.properties_ file
+
+**Step 3**
+
+Execute the setUpOCI Script: ```python setUpOCI.py /cd3user/tenancies//_setUpOCI.properties```
+ → Example execution of the script:
+
+```
+[cd3user@25260a87b137 cd3_automation_toolkit]$ python setUpOCI.py /cd3user/tenancies/demotenancy/demotenancy_setUpOCI.properties
+Updated OCI_Regions file !!!
+Script to fetch the compartment OCIDs into variables file has not been executed.
+Do you want to run it now? (y|n):
+```
+→ This prompt appears for the very first time when you run the toolkit or when any new compartments are created using the toolkit. Enter 'y' to fetch the details of compartment OCIDs into variables file.
+ → After fetching the compartment details, the toolkit will display the menu options.
-
diff --git a/cd3_automation_toolkit/documentation/user_guide/cli_jenkins.md b/cd3_automation_toolkit/documentation/user_guide/cli_jenkins.md
new file mode 100644
index 000000000..7b6a1ad01
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/cli_jenkins.md
@@ -0,0 +1,35 @@
+# Switch between using the toolkit via CLI and Jenkins UI
+
+> **Note -**
+>***It is recommended to stick to using the toolkit either via CLI or via Jenkins.***
+
+There can be scenarios when you need to update the **terraform_files** folder manually via CLI. Below are some examples:
+
+- You executed setUpOCI script to generate tfvars for some resources via CLI.
+- You updated **variables_\.tf** file to update image OCID or SSH Key for Compute or Database etc.
+
+Please follow the below steps to sync the local terraform_files folder to the OCI DevOps GIT repo:
+
+- ```cd /cd3user/tenancies//terraform_files```
+- ```git status```
+ Below screenshot shows changes in variables_phoenix.tf file under phoenix/compute folder.
+
+
+
+- ```git add -A .```
+
+- ```git commit -m "msg"```
+
+
+
+- ```git push```
+
+
+
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md b/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md
index f97ebb4ae..f9cfe270b 100644
--- a/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md
+++ b/cd3_automation_toolkit/documentation/user_guide/learn_more/CD3ExcelTabs.md
@@ -108,22 +108,22 @@ Click on the links below to know about the specifics of each tab in the excel sh
#### Developer Services
- - [OKE](https://github.com/oracle-devrel/cd3-automation-toolkit/blob/develop/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#oke-tab)
+ - [OKE](/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#oke-tab)
Click here to view sample auto.tfvars for OKE components- Clusters, Nodepools
#### Logging Services
- - [VCN Flow Logs](https://github.com/oracle-devrel/cd3-automation-toolkit/blob/develop/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#vcn-flow-logs)
- - [LBaaS Logs](https://github.com/oracle-devrel/cd3-automation-toolkit/blob/develop/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#lbaas-logs)
-- [OSS Logs](https://github.com/oracle-devrel/cd3-automation-toolkit/blob/develop/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#oss-logs)
+ - [VCN Flow Logs](/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#vcn-flow-logs)
+ - [LBaaS Logs](/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#lbaas-logs)
+- [OSS Logs](/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#oss-logs)
Click here to view sample auto.tfvars for Logging components
#### SDDCs Tab
- - [OCVS](https://github.com/oracle-devrel/cd3-automation-toolkit/blob/develop/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#sddcs-tab)
-
+ - [OCVS](/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md#sddcs-tab)
+
Click here to view sample auto.tfvars for OCVS
diff --git a/cd3_automation_toolkit/documentation/user_guide/learn_more/OPAForCompliance.md b/cd3_automation_toolkit/documentation/user_guide/learn_more/OPAForCompliance.md
index 87a43db80..722e16196 100755
--- a/cd3_automation_toolkit/documentation/user_guide/learn_more/OPAForCompliance.md
+++ b/cd3_automation_toolkit/documentation/user_guide/learn_more/OPAForCompliance.md
@@ -33,3 +33,11 @@ Alternatively, run the following command to evaluate just a sinle OPA rule say "
This command will analyze the "tfplan.json" input file against the policy and display the evaluation results with a user-friendly format.
+
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/learn_more/ResourceManagerUpload.md b/cd3_automation_toolkit/documentation/user_guide/learn_more/ResourceManagerUpload.md
index 55b87ebca..4ce6e8408 100644
--- a/cd3_automation_toolkit/documentation/user_guide/learn_more/ResourceManagerUpload.md
+++ b/cd3_automation_toolkit/documentation/user_guide/learn_more/ResourceManagerUpload.md
@@ -3,8 +3,6 @@
This option will upload the created Terraform files & the tfstate (if present) to the OCI Resource Manager.
-On choosing **"Developer Services"** in the SetUpOCI menu, choose **"Upload current terraform files/state to Resource Manager"** sub-option to upload the terraform outdir into OCI Resource Manager.
-
When prompted, specify the Region to create/upload the terraform files to Resource Manager Stack. Multiple regions can be specified as comma separated values. Specify 'global' to upload RPC related components which reside in 'global' directory.
On the next prompt, enter the Compartment where the Stack should be created if it is for the first time. The toolkit will create a Stack for the region specified previously under the specified compartment. For global resources, stack will be created in the home region.
@@ -27,4 +25,15 @@ Sample Execution:
-
+
+
+
+
+On choosing **"Developer Services"** in the SetUpOCI menu, choose the **"Upload current terraform files/state to Resource Manager"** sub-option to upload the terraform outdir into OCI Resource Manager.
+
+> [!IMPORTANT]
+> If you are using remote state and upload the stack to OCI Resource Manager using **Upload current terraform files/state to Resource Manager** under **Developer Services**, then running terraform plan/apply from OCI Resource Manager will not work and will show the below error:
+>
+
+
+> You will have to remove backend.tf from the directory, bring the remote state into local, and then re-upload the stack.
+
diff --git a/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md b/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md
index 6d82b1d9d..21cbfb481 100644
--- a/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md
+++ b/cd3_automation_toolkit/documentation/user_guide/learn_more/Tabs.md
@@ -573,7 +573,10 @@ Note -
![image](https://user-images.githubusercontent.com/115973871/216242750-d84a79bf-5799-4e51-ba40-ca82a00d04aa.png)
- Also, When the target kind is **'notifications'** the value for formatted messages parameter is set to **'true'** as default. Its set to **'false'** only when the source is 'streaming'.
+
+- After executing tf_import_commands during export of service connectors, terraform plan may show the ordering of log-sources as a change and rearrange the log-sources for that service connector if the source/target kind is logging. This can be ignored and you can proceed with terraform apply.
+ ![image](https://github.com/oracle-devrel/cd3-automation-toolkit/assets/103548537/1005724e-ac03-4b45-8e3d-480c8826d065)
## OKE Tab
@@ -678,6 +681,7 @@ Below TF file is created:
Use this tab to create OCVS in your tenancy.
>Note:
+>As of now, the toolkit supports only single-cluster SDDCs.
The column "SSH Key Var Name" accepts SSH key value directly or the name of variable declared in *variables.tf* under the **sddc_ssh_keys** variable containing the key value. Make sure to have an entry in variables_\.tf file with the name you enter in SSH Key Var Name field of the Excel sheet and put the value as SSH key value.
>For Eg: If you enter the SSH Key Var Name as **ssh_public_key**, make an entry in variables_\.tf file as shown below:
diff --git a/cd3_automation_toolkit/documentation/user_guide/multiple_options_GF-Jenkins.md b/cd3_automation_toolkit/documentation/user_guide/multiple_options_GF-Jenkins.md
new file mode 100644
index 000000000..09a788918
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/multiple_options_GF-Jenkins.md
@@ -0,0 +1,28 @@
+# Provisioning of multiple services together
+
+>***Note - For any service that needs Network details, eg compute, database, loadbalancers etc, the 'network' pipeline needs to be executed prior to launching that service pipeline.***
+
+Multiple options can be selected simultaneously while creating resources in OCI using the setUpOCI pipeline. But if one of the services is dependent upon the availability of another service, eg 'Network' (Create Network) and 'Compute' (Add Instances), the terraform-apply pipeline for compute will fail as shown in the below screenshot (last stage in the pipeline) -
+![tuxpi com 1706871371](https://github.com/oracle-devrel/cd3-automation-toolkit/assets/103508105/959dea07-b569-4908-967c-d4d1efbafe04)
+
+
+* Clicking on 'Logs' for Stage: sanjose/compute and then clicking on the pipeline will display the below -
+
+![tuxpi com 1706871675](https://github.com/oracle-devrel/cd3-automation-toolkit/assets/103508105/65536e92-6612-4c6e-9d79-4a347a5cee9a)
+
+
+* Clicking on 'Logs' for Stage Terraform Plan displays -
+
+![tuxpi com 1706871787](https://github.com/oracle-devrel/cd3-automation-toolkit/assets/103508105/711e1687-690f-4cbd-8abc-3fd4da108f9f)
+
+- This is expected because the pipeline for 'compute' expects the network to already exist in OCI before launching a new instance.
+- To resolve this, proceed with the terraform-apply pipeline for 'network' and, once it has completed successfully, trigger the terraform-apply pipeline for 'compute' manually by clicking 'Build Now' from the left menu.
+
+![tuxpi com 1706871906](https://github.com/oracle-devrel/cd3-automation-toolkit/assets/103508105/c3b7adb9-183b-4b79-bf9e-d492b3a5f7aa)
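Note that the same ordering applies outside Jenkins; running the generated Terraform manually from the CLI would look roughly like this (the directory layout shown is illustrative, not a guaranteed path):

```
# Apply network first so that compute can reference the created VCN/subnets.
cd /cd3user/tenancies/<prefix>/terraform_files/<region>/network
terraform apply

# Only once the network apply succeeds, apply compute.
cd ../compute
terraform apply
```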
+
+
+
+
+
+| :arrow_backward: Prev | Next :arrow_forward: |
+| :---- | -------: |
diff --git a/cd3_automation_toolkit/documentation/user_guide/remote_state.md b/cd3_automation_toolkit/documentation/user_guide/remote_state.md
new file mode 100644
index 000000000..f8862640d
--- /dev/null
+++ b/cd3_automation_toolkit/documentation/user_guide/remote_state.md
@@ -0,0 +1,44 @@
+# Store Terraform State into Object Storage Bucket
+
+> [!Caution]
+> When utilizing remote state and deploying the stack to OCI Resource Manager through the **Upload current terraform files/state to Resource Manager** option under **Developer Services**, attempting to execute terraform plan/apply directly from OCI Resource Manager may result in the error below.
+>
+
+
+> This option is disabled while using the toolkit via Jenkins. When using it via CLI, you will have to remove backend.tf from the directory, bring the remote state local, and then upload the stack.
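For the CLI flow, a sketch of bringing the remote state local before uploading the stack (the path is illustrative; `terraform init -migrate-state` moves the state to the local backend once backend.tf is gone):

```
cd /cd3user/tenancies/<prefix>/terraform_files/<region>/<service>  # illustrative path
rm backend.tf
terraform init -migrate-state   # copies the remote state into a local terraform.tfstate
```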
+
+
+* The toolkit provides the option to store the Terraform state file (tfstate) in an Object Storage bucket.
+* This can be achieved by setting ```use_remote_state=yes``` under Advanced Parameters in the ```tenancyconfig.properties``` file while executing ```createTenancyConfig.py```.
+* Upon setting the above parameter, the script will -
+    - create a versioning-enabled bucket in the OCI tenancy in the specified region (unless you specify an existing bucket in the ```remote_state_bucket_name``` parameter)
+    - create a new customer secret key for the user and configure it as S3 credentials to access the bucket. Before executing the createTenancyConfig.py script, ensure that the user specified in the DevOps User Details (or identified by the user OCID) does not already have the maximum limit of two customer secret keys assigned.
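The existing secret keys can be checked beforehand with the OCI CLI (the user OCID is a placeholder):

```
# Lists the user's customer secret keys; the limit is two per user.
oci iam customer-secret-key list --user-id <user_ocid>
```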
+
+* The backend.tf file that gets generated looks like this -
+
+  ```
+  terraform {
+    backend "s3" {
+      key                         = "<region>/<service>/terraform.tfstate"
+      bucket                      = "<prefix>-automation-toolkit-bucket"
+      region                      = "<region>"
+      endpoint                    = "https://<namespace>.compat.objectstorage.<region>.oraclecloud.com"
+      shared_credentials_file     = "/cd3user/tenancies/<prefix>/.config_files/<prefix>_s3_credentials"
+      skip_region_validation      = true
+      skip_credentials_validation = true
+      skip_metadata_api_check     = true
+      force_path_style            = true
+    }
+  }
+  ```
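The endpoint follows OCI's Amazon S3 Compatibility API format, built from the tenancy's Object Storage namespace and the region identifier; a small shell illustration (the namespace and region values below are made up):

```shell
namespace="axaxnpcrorw5"   # hypothetical Object Storage namespace
region="uk-london-1"       # hypothetical region identifier

# S3-compatible endpoint: <namespace>.compat.objectstorage.<region>.oraclecloud.com
endpoint="https://${namespace}.compat.objectstorage.${region}.oraclecloud.com"
echo "$endpoint"
```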
+
+* For a single outdir, the tfstate for all subscribed regions will be stored as ```<region>/terraform.tfstate```, e.g. ```london/terraform.tfstate``` for London and ```phoenix/terraform.tfstate``` for Phoenix.
+* For a multi outdir, the tfstate for all services in all subscribed regions will be stored as ```<region>/<service>/terraform.tfstate```, e.g. ```london/tagging/terraform.tfstate``` for the tagging dir in the London region.
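The two object-key layouts can be illustrated in shell, using `/` as the object-name separator as in the generated backend key (region and service values are examples):

```shell
region="london"
service="tagging"

# Single outdir: one state object per region
single_key="${region}/terraform.tfstate"
# Multi outdir: one state object per service per region
multi_key="${region}/${service}/terraform.tfstate"

echo "$single_key"
echo "$multi_key"
```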
+
+