Refactored config path retrieval, removed deprecated files, and impro… #211
base: main
Conversation
…ved readability
Reworked how Get.py gets the path of config.ini (for Flag.py): it now uses proper path logic instead of depending on the execution script's path.
Removed init files for IDEA.
Removed the deprecated feature _generate_data.py.
Minor updates to FileManagement.py and config.ini for readability.
Allowed _test_gpu_acceleration.py to be imported via the check_gpu() function; this is experimental and still planned for removal.
Signed-off-by: Shahm Najeeb <[email protected]>
Implemented a function `get_network_info()` that collects various network statistics using the psutil library. Network data is saved to files covering interface stats, connections, interface IP addresses, and bandwidth usage. External network commands (e.g. `ipconfig`) are executed and their output saved, and process names and PIDs are recorded for network connections. Logging was added for debugging and error handling to track the process flow, and a custom save function stores the collected data in the "network_data" directory. Delayed the dump_memory.py TODO to v3.4.1.
Signed-off-by: Shahm Najeeb <[email protected]>
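For context, a minimal sketch of this kind of psutil-based collection (illustrative only; the helper and file names below are assumptions, not the module's actual code):

```python
import os
import psutil

def collect_basic_network_info(save_dir: str = "network_data") -> None:
    # Gather I/O counters, per-interface addresses, and active connections.
    # net_connections() may require elevated privileges on some platforms.
    os.makedirs(save_dir, exist_ok=True)
    io = psutil.net_io_counters()
    with open(os.path.join(save_dir, "network_io.txt"), "w") as f:
        f.write(f"Bytes sent: {io.bytes_sent}\nBytes received: {io.bytes_recv}\n")
    with open(os.path.join(save_dir, "interfaces.txt"), "w") as f:
        for name, addrs in psutil.net_if_addrs().items():
            f.write(f"{name}: {[a.address for a in addrs]}\n")
    with open(os.path.join(save_dir, "connections.txt"), "w") as f:
        for conn in psutil.net_connections(kind="inet"):
            f.write(f"{conn.laddr} -> {conn.raddr} ({conn.status}, pid={conn.pid})\n")
```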
Implemented basic caching to speed it up.
Signed-off-by: Shahm Najeeb <[email protected]>
Signed-off-by: Shahm Najeeb <[email protected]>
…a reminder for me :>
Walkthrough
A few files have been removed, and several function signatures and version numbers have been updated. The CSV editor configuration and the deprecated data generator have been deleted. In some modules, print statements were replaced with return values, and type hints and error handling were enhanced. The configuration file now lists a new network module, and minor task updates were made in the plans. Additionally, a new network info module was added, along with caching improvements in the vulnerability scan functionality.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant Vulnscan
    participant Cache as Model/Vectorizer Cache
    participant Disk
    Caller->>Vulnscan: call load_model(model_path)
    Vulnscan->>Cache: Check if model is cached
    alt Model in cache
        Cache-->>Vulnscan: Return cached model
    else
        Vulnscan->>Disk: Load model from disk
        Disk-->>Vulnscan: Model loaded
        Vulnscan->>Cache: Cache the loaded model
    end
    Vulnscan-->>Caller: Return model
```
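A minimal sketch of the cache-check flow shown above, assuming joblib as the loader (the PR's actual load_model also handles `.safetensors` and `.pth` files):

```python
import joblib  # assumption: the real loader may use pickle, torch, or safetensors

model_cache: dict[str, object] = {}

def load_model(model_path: str):
    # Return the cached model if this path was loaded before
    if model_path in model_cache:
        return model_cache[model_path]
    # Otherwise load from disk and remember it for next time
    model = joblib.load(model_path)
    model_cache[model_path] = model
    return model
```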
```mermaid
sequenceDiagram
    participant User
    participant NetModule as network_psutil.py
    participant PSUtil as psutil Library
    participant FileSys as File System
    User->>NetModule: call get_network_info()
    NetModule->>PSUtil: Fetch network statistics
    NetModule->>FileSys: Save network_io.txt and related files
    NetModule->>NetModule: Execute external command (ipconfig)
    NetModule-->>User: Complete execution
```
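A small sketch of the last two steps in the diagram above, saving output and shelling out to ipconfig via subprocess (the helper name and output file name are illustrative):

```python
import os
import subprocess

def save_ipconfig_output(save_dir: str = "network_data") -> None:
    # Run the external command and capture its output (Windows-only command)
    result = subprocess.run(["ipconfig", "/all"], capture_output=True, text=True)
    os.makedirs(save_dir, exist_ok=True)
    with open(os.path.join(save_dir, "ipconfig.txt"), "w") as f:
        f.write(result.stdout)
```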
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 1
🧹 Nitpick comments (10)
CODE/logicytics/Get.py (1)
57-68
: Don't bail too fast with `exit(1)`
Right now, if `config.ini` isn't found, we just print a message and quit. That works, but it might be nicer to raise an exception or use logging so upstream code can handle it.
CODE/vulnscan.py (3)
39-70
: Awesome model-loading with a fallback
Your code checks for `.pkl`, `.safetensors`, and `.pth`. The caching logic is super helpful. Just watch out for possible version mismatches in the future, but for now, it's all good.
120-132
: Thread locks FTW
Syncing model loads helps avoid weird race conditions. Sweet design, but maybe add a quick comment in the code on why the locks are needed for people new to concurrency.
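For readers new to concurrency, a small illustrative sketch of what the lock buys (not the PR's exact code; the disk loader here is a stand-in):

```python
import threading
import time

model_cache = {}
model_cache_lock = threading.Lock()

def _load_from_disk(model_path: str) -> str:
    # Stand-in for an expensive deserialisation step
    time.sleep(0.1)
    return f"model loaded from {model_path}"

def load_model_threadsafe(model_path: str) -> str:
    # Without the lock, two threads could both miss the cache and deserialise
    # the same model twice; the lock makes the check-then-load step atomic.
    with model_cache_lock:
        if model_path not in model_cache:
            model_cache[model_path] = _load_from_disk(model_path)
        return model_cache[model_path]
```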
233-233
: Heads up about memory
Telling users that loading files can be slow or huge is helpful. Maybe also mention possible memory-based exceptions if things get too wild.
CODE/network_psutil.py (3)
13-16
: Add error handling to catch file write issues.
What if the disk is full or you don't have write permissions? Let's catch those errors!

```diff
 def __save_data(filename: str, data: str, father_dir_name: str = "network_data"):
     os.makedirs(father_dir_name, exist_ok=True)
-    with open(os.path.join(father_dir_name, filename), "w") as f:
-        f.write(data)
+    try:
+        with open(os.path.join(father_dir_name, filename), "w") as f:
+            f.write(data)
+    except IOError as e:
+        log.error(f"Failed to save {filename}: {e}")
```
78-85
: The bandwidth measurement could be more accurate.
Using `sleep(1)` for bandwidth measurement isn't super reliable because:
- It only measures for 1 second (too short!)
- Other processes might affect the timing
- Network spikes could give weird results
log.debug("Measuring network bandwidth usage...") -net1 = psutil.net_io_counters() -time.sleep(1) -net2 = psutil.net_io_counters() -bandwidth_data = f"Upload Speed: {(net2.bytes_sent - net1.bytes_sent) / 1024} KB/s\n" -bandwidth_data += f"Download Speed: {(net2.bytes_recv - net1.bytes_recv) / 1024} KB/s\n" +# Take multiple samples over 5 seconds +samples = [] +for _ in range(5): + net1 = psutil.net_io_counters() + time.sleep(1) + net2 = psutil.net_io_counters() + samples.append({ + 'up': (net2.bytes_sent - net1.bytes_sent) / 1024, + 'down': (net2.bytes_recv - net1.bytes_recv) / 1024 + }) + +# Calculate average speeds +avg_up = sum(s['up'] for s in samples) / len(samples) +avg_down = sum(s['down'] for s in samples) / len(samples) +bandwidth_data = f"Average Upload Speed: {avg_up:.2f} KB/s\n" +bandwidth_data += f"Average Download Speed: {avg_down:.2f} KB/s\n"
87-91
: Add IPv6 support and error handling for DNS resolution.
The current code only gets IPv4 addresses and might break if DNS resolution fails.

```diff
-hostname = socket.gethostname()
-ip_address = socket.gethostbyname(hostname)
-ip_config_data = f"Hostname: {hostname}\nIP Address: {ip_address}\n"
+try:
+    hostname = socket.gethostname()
+    # Get both IPv4 and IPv6 addresses
+    ip_addresses = []
+    for res in socket.getaddrinfo(hostname, None):
+        ip = res[4][0]
+        if ip not in ip_addresses:  # Avoid duplicates
+            ip_addresses.append(ip)
+
+    ip_config_data = f"Hostname: {hostname}\n"
+    ip_config_data += "IP Addresses:\n"
+    for ip in ip_addresses:
+        ip_config_data += f"  - {ip}\n"
+except socket.gaierror as e:
+    log.error(f"Failed to resolve hostname: {e}")
+    ip_config_data = f"Hostname: {hostname}\nFailed to resolve IP addresses\n"
```

CODE/logicytics/FileManagement.py (2)
130-147
: Make error handling more specific.
You're catching all exceptions with a generic handler. Let's be more specific about what can go wrong!

```diff
-    except Exception as e:
-        return f"Error: {e}"
+    except PermissionError as e:
+        return f"Permission denied: {e}"
+    except FileNotFoundError as e:
+        return f"File not found: {e}"
+    except Exception as e:
+        return f"Unexpected error: {e}"
```
206-207
: Use a more descriptive variable name than 'time'.
'time' is a bit vague. Let's make it clearer what this represents!

```diff
-time = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
-filename = f"Logicytics_{name}_{flag}_{time}"
+timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
+filename = f"Logicytics_{name}_{flag}_{timestamp}"
```

CODE/config.ini (1)
22-22
: That's one looong line! Let's make it easier to read.
The files list is getting super long. Would be easier to read if we split it into multiple lines!

```diff
-files = "bluetooth_details.py, bluetooth_logger.py, browser_miner.ps1, cmd_commands.py, config.ini, dir_list.py, dump_memory.py, event_log.py, Logicytics.py, log_miner.py, media_backup.py, netadapter.ps1, network_psutil.py, packet_sniffer.py, property_scraper.ps1, registry.py, sensitive_data_miner.py, ssh_miner.py, sys_internal.py, tasklist.py, tree.ps1, vulnscan.py, wifi_stealer.py, window_feature_miner.ps1, wmic.py, _debug.py, _dev.py, logicytics\Checks.py, logicytics\Execute.py, logicytics\FileManagement.py, logicytics\Flag.py, logicytics\Get.py, logicytics\Logger.py, logicytics\User_History.json.gz, logicytics\__init__.py, SysInternal_Suite\.sys.ignore, SysInternal_Suite\SysInternal_Suite.zip, VulnScan\Model SenseMini .3n3.pth, VulnScan\README.md, VulnScan\Vectorizer .3n3.pkl, VulnScan\tools\_study_network.py, VulnScan\tools\_test_gpu_acceleration.py, VulnScan\tools\_vectorizer.py, VulnScan\v3\_generate_data.py, VulnScan\v3\_train.py"
+files = """
+    bluetooth_details.py,
+    bluetooth_logger.py,
+    browser_miner.ps1,
+    cmd_commands.py,
+    config.ini,
+    # ... (continue with one file per line)
+    VulnScan\v3\_train.py
+"""
```
🛑 Comments failed to post (1)
CODE/network_psutil.py (1)
9-10:
⚠️ Potential issue
Hey! Move the log initialization outside the `if __name__ == "__main__"` block.
The log initialization should be at the module level since it's used by the `get_network_info` function. Right now, it'll raise a NameError if someone imports and uses the function.

```diff
-if __name__ == "__main__":
-    log = Log({"log_level": DEBUG})
+log = Log({"log_level": DEBUG})
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
log = Log({"log_level": DEBUG})
```
…gleton pattern across multiple scripts
Minor changes in some scripts to make them simpler and far more DRY.
Log is now created in __init__.py rather than having to be imported, which simplifies development while still letting more experienced users set their own log (the variable must have a different name for dev purposes). A __new__ check ensures the log variable is only initialised once, and the reset method clears previous configurations.
Deleted a path that is not needed in browser_miner.ps1.
Logger.py sets bold_red rather than red for critical errors.
Logic fix: _debug.py now only shows "All required files are present." if there are no extra or missing files.
Signed-off-by: Shahm Najeeb <[email protected]>
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 3
🔭 Outside diff range comments (7)
CODE/registry.py (1)
23-23
: ⚠️ Potential issue
Oops! There's a bug in the error handler! 🐛
The `result` variable is used in the error message on line 23, but if `subprocess.run()` fails, `result` won't exist yet. This could cause another error to pop up and hide the real problem!
Here's how to fix it:

```diff
 except subprocess.CalledProcessError as e:
-    log.error(f"Failed to back up the registry: {e}. More details: {result.stderr}")
+    log.error(f"Failed to back up the registry: {e}. More details: {e.stderr}")
```

CODE/ssh_miner.py (1)
37-41
: 🛠️ Refactor suggestion
Hey, we should probably add some security checks here!
When dealing with SSH keys (which are super sensitive!), we should:
- Check if the destination directory has safe permissions
- Set restrictive permissions on the backup files
- Maybe encrypt the backups?
Here's how we can make it safer:
```diff
     try:
+        # Check destination directory permissions
+        if os.path.exists(destination_dir):
+            dir_stat = os.stat(destination_dir)
+            if dir_stat.st_mode & 0o077:  # Check if others have any access
+                log.warning("Destination directory has unsafe permissions")
+
         shutil.copytree(source_dir, destination_dir)
+        # Set restrictive permissions on backup
+        os.chmod(destination_dir, 0o700)  # Only owner can access
         log.info("SSH keys and configuration backed up successfully.")
```

CODE/log_miner.py (1)
31-31
: ⚠️ Potential issue
Watch out! This could be dangerous! 🚨
The way we're building the PowerShell command could let bad stuff happen if someone messes with the log_type or backup_file variables.
Let's make it safer:
```diff
-cmd = f'Get-EventLog -LogName "{log_type}" | Export-Csv -Path "{backup_file}" -NoTypeInformation'
+# Validate log_type
+valid_log_types = ["System", "Application", "Security"]
+if log_type not in valid_log_types:
+    raise ValueError(f"Invalid log type. Must be one of {valid_log_types}")
+
+# Validate backup_file path
+if not backup_file.endswith('.csv') or '/' in backup_file or '\\' in backup_file:
+    raise ValueError("Invalid backup file name")
+
+cmd = f'Get-EventLog -LogName "{log_type}" | Export-Csv -Path "{backup_file}" -NoTypeInformation'
```

CODE/sys_internal.py (1)
32-37
: ⚠️ Potential issue
Hold up! We need to be more careful with these commands! 🛡️
Running executables without checking if they exist or validating their paths could be risky.
Here's how to make it safer:
```diff
+    # Validate executable path
+    exe_path = os.path.join('SysInternal_Suite', executable)
+    if not os.path.isfile(exe_path):
+        raise FileNotFoundError(f"Executable not found: {exe_path}")
+
+    # Validate executable file permissions
+    if os.access(exe_path, os.X_OK):
-    command = f"{os.path.join('SysInternal_Suite', executable)}"
+        command = f"{exe_path}"
+    else:
+        raise PermissionError(f"No execute permission for: {exe_path}")

     # Execute the command and capture the output
     result = subprocess.run(
-        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE
+        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+        shell=False  # Safer execution without shell
     )
```

CODE/vulnscan.py (1)
389-398
: 🛠️ Refactor suggestion
Let's fix that FIXME! 🛠️
The finally block could use some love to make sure we clean up properly.
Here's a better way to handle cleanup:
```diff
 if __name__ == "__main__":
     try:
         packet_sniffer()
     except Exception as e:
         log.error(e)
     finally:
-        if G:
-            plt.close()
+        # Clean up resources
+        try:
+            if G:
+                plt.close('all')  # Close all figures
+            model_cache.clear()  # Clear model cache
+            vectorizer_cache.clear()  # Clear vectorizer cache
+        except Exception as e:
+            log.error(f"Error during cleanup: {e}")
```

CODE/packet_sniffer.py (1)
389-398
: 🛠️ Refactor suggestion
Another finally block to fix! 🔧
Just like in vulnscan.py, we should make the cleanup more robust.
Here's how to make it better:
```diff
 if __name__ == "__main__":
     try:
         packet_sniffer()
     except Exception as e:
         log.error(e)
     finally:
-        if G:
-            plt.close()
+        try:
+            if G:
+                plt.close('all')  # Close all figures
+            # Clean up any open network resources
+            conf.verb = 0  # Reset verbosity
+        except Exception as e:
+            log.error(f"Error during cleanup: {e}")
```

CODE/Logicytics.py (1)
396-399
: ⚠️ Potential issue
Let's make those system commands safer! 🛡️
Using `subprocess.call` with a string command can be risky, even with `shell=False`. We should split the command into a list.
Here's the safer way:
```diff
 if SUB_ACTION == "shutdown":
-    subprocess.call("shutdown /s /t 3", shell=False)
+    subprocess.call(["shutdown", "/s", "/t", "3"], shell=False)
 elif SUB_ACTION == "reboot":
-    subprocess.call("shutdown /r /t 3", shell=False)
+    subprocess.call(["shutdown", "/r", "/t", "3"], shell=False)
```
🧹 Nitpick comments (13)
CODE/logicytics/__init__.py (1)
19-19
: Hey! Let's make this line super simple! 🎯
Instead of writing it the long way with `True if ... else False`, you can just compare directly! It's like saying "yes" or "no" instead of "if it's yes then yes else no" 😄
Here's the shorter way to write it:

```diff
-__show_trace = True if DEBUG == "DEBUG" else False
+__show_trace = DEBUG == "DEBUG"
```

🧰 Tools
🪛 Ruff (0.8.2)
19-19: Remove unnecessary `True if ... else False` (SIM210)
CODE/cmd_commands.py (1)
20-21
: Let's make the file handling safer! 🔒
Using `with` would be better here to make sure the file gets closed properly, even if something goes wrong.
Here's a safer way to write to the file:

```diff
 output = Execute.command(commands)
-open(file, "w", encoding=encoding).write(output)
+with open(file, "w", encoding=encoding) as f:
+    f.write(output)
```

🧰 Tools
🪛 Ruff (0.8.2)
21-21: Use a context manager for opening files
(SIM115)
CODE/wmic.py (2)
19-20
: Let's handle the WMIC.html file more safely! 🔒
Just like before, using `with` would be better here to make sure the file gets closed properly.
Here's how to fix it:

```diff
 data = Execute.command("wmic BIOS get Manufacturer,Name,Version /format:htable")
-open("WMIC.html", "w").write(data)
+with open("WMIC.html", "w") as f:
+    f.write(data)
```

🧰 Tools
🪛 Ruff (0.8.2)
20-20: Use a context manager for opening files
(SIM115)
28-33
: Make the loop more Pythonic! 🐍
Instead of using `range(len())`, we can use `enumerate()` to make the code cleaner and more readable.
Here's a better way to write the loop:

```diff
 with open("wmic_output.txt", "w") as file:
-    for i in range(len(wmic_commands)):
-        log.info(f"Executing Command Number {i + 1}: {wmic_commands[i]}")
-        output = Execute.command(wmic_commands[i])
-        file.write("-" * 190)
-        file.write(f"Command {i + 1}: {wmic_commands[i]}\n")
-        file.write(output)
+    for i, command in enumerate(wmic_commands, 1):
+        log.info(f"Executing Command Number {i}: {command}")
+        output = Execute.command(command)
+        file.write("-" * 190)
+        file.write(f"Command {i}: {command}\n")
+        file.write(output)
```

CODE/dir_list.py (1)
57-61
: Let's make this run faster and safer! 🚀
The thread pool setup could be smarter and we should add some safety checks.
Here's how we can improve it:
```diff
+    # Calculate optimal thread count based on system and workload
+    optimal_workers = min(32, (os.cpu_count() or 1) * 2)
+
+    # Validate directory paths
+    if not os.path.exists(base_directory):
+        raise ValueError(f"Base directory does not exist: {base_directory}")
+
-    with ThreadPoolExecutor(max_workers=min(32, os.cpu_count() * 4)) as executor:
+    with ThreadPoolExecutor(max_workers=optimal_workers) as executor:
         subdirectories = [os.path.join(base_directory, d) for d in os.listdir(base_directory)
                           if os.path.isdir(os.path.join(base_directory, d))]
+        # Add a limit to prevent overwhelming the system
+        if len(subdirectories) > 1000:
+            log.warning(f"Large directory count ({len(subdirectories)}), this might take a while")
         futures = [executor.submit(run_command_threaded, subdir, file, message, encoding) for subdir in
```

CODE/network_psutil.py (1)
16-16
: Hey, let's tackle that TODO! 🛠️
Breaking up this big function would make it easier to understand and maintain.
Want me to help split this into smaller helper functions? I can show you how to break it down!
🧰 Tools
🪛 GitHub Check: CodeFactor
[notice] 16-16: CODE/network_psutil.py#L16
Unresolved comment '# TODO: Split me up into helper functions'. (C100)
CODE/bluetooth_details.py (1)
52-56
: Hey! Let's make this PowerShell command easier to read! 🔧
The command string is a bit squished together. We can make it more readable by breaking it into smaller pieces.
Here's a cleaner way to write it:
```diff
-    command = (
-        "Get-PnpDevice | Where-Object { $_.FriendlyName -like '*Bluetooth*' } | "
-        "Select-Object FriendlyName, DeviceID, Description, Manufacturer, Status, PnpDeviceID | "
-        "ConvertTo-Json -Depth 3"
-    )
+    command = [
+        "Get-PnpDevice",
+        "| Where-Object { $_.FriendlyName -like '*Bluetooth*' }",
+        "| Select-Object FriendlyName, DeviceID, Description, Manufacturer, Status, PnpDeviceID",
+        "| ConvertTo-Json -Depth 3"
+    ]
+    command = " ".join(command)
```

CODE/sensitive_data_miner.py (1)
87-89
: Let's make the file size limit adjustable! 🔧
Having a fixed 10MB limit might not work for everyone. It would be better to make this configurable!
Here's how we can make it better:
```diff
+# At the top of the file, after imports
+DEFAULT_MAX_FILE_SIZE = 10_000_000  # 10MB in bytes
+
+    @classmethod
+    def set_max_file_size(cls, size_in_mb: int):
+        """Set the maximum file size limit in megabytes."""
+        cls.max_file_size = size_in_mb * 1_000_000
+
     @staticmethod
     def __copy_file(src_file_path: Path, dst_file_path: Path):
-        if src_file_path.stat().st_size > 10_000_000:  # 10MB limit
+        if src_file_path.stat().st_size > getattr(Mine, 'max_file_size', DEFAULT_MAX_FILE_SIZE):
             log.warning("File exceeds size limit")
             return
```

CODE/bluetooth_logger.py (1)
104-136
: Let's make our error messages more helpful! 🎯
When something goes wrong with getting Bluetooth devices, it would be nice to know exactly what happened.
Here's how we can make the error messages more helpful:
```diff
     devices = parse_output(
         output,
         regex=r"^(?P<Name>.+?)\s+(?P<DeviceID>.+)$",
         group_names=["Name", "DeviceID"]
     )
+    if not devices:
+        log.warning("No Bluetooth devices found. Make sure Bluetooth is turned on and devices are paired.")

     # Extract MAC addresses
     for device in devices:
         mac_match = re.search(r"BLUETOOTHDEVICE_(?P<MAC>[A-F0-9]{12})", device["DeviceID"], re.IGNORECASE)
-        device["MAC"] = mac_match.group("MAC") if mac_match else "Address Not Found"
+        if not mac_match:
+            log.warning(f"Could not extract MAC address from device ID: {device['DeviceID']}")
+            device["MAC"] = "Address Not Found"
+        else:
+            device["MAC"] = mac_match.group("MAC")
```

CODE/vulnscan.py (2)
20-30
: Hey! Let's talk about memory management! 🧠
The caching of models and vectorizers is super cool for speed, but these ML models can be pretty chunky! You might wanna add a warning about memory usage or maybe even a cache size limit.
Consider adding a max cache size like this:
```diff
 # Caching dictionaries for model and vectorizer
 model_cache = {}
 vectorizer_cache = {}
+MAX_CACHE_SIZE = 5  # Maximum number of models/vectorizers to keep in cache
```
33-66
: Let's make that cache even smarter! 🚀
The caching looks great, but what happens when we load too many models? We should probably kick out the oldest ones when we hit our limit!
Here's a cool way to do it:
```diff
 def load_model(model_path_to_load: str) -> safe_open | torch.nn.Module:
     # Check cache first
     if model_path_to_load in model_cache:
         log.info(f"Using cached model from {model_path_to_load}")
         return model_cache[model_path_to_load]

+    # Evict oldest model if cache is full
+    if len(model_cache) >= MAX_CACHE_SIZE:
+        oldest_model = next(iter(model_cache))
+        del model_cache[oldest_model]
+        log.info(f"Evicted {oldest_model} from cache")

     # Load model if not cached
     if model_path_to_load.endswith('.pkl'):
```

CODE/_debug.py (1)
13-17
: Let's make those log paths more flexible! 📁
The log path is hardcoded which might cause issues if someone runs this from a different directory.
Try this instead:
```diff
+import pathlib
+
+# Get the base directory relative to this file
+BASE_DIR = pathlib.Path(__file__).parent.parent
+LOG_PATH = BASE_DIR / "ACCESS" / "LOGS" / "DEBUG" / "DEBUG.log"
+
 log = Log(
     {"log_level": DEBUG,
-     "filename": "../ACCESS/LOGS/DEBUG/DEBUG.log",
+     "filename": str(LOG_PATH),
      "truncate_message": False}
 )
```

CODE/Logicytics.py (1)
13-17
: Why rename log to log_main? 🤔
The renaming from `log` to `log_main` seems unnecessary since we're already importing it directly from logicytics. It might just make things more confusing.
Let's keep it simple:
```diff
-log_main = Log(
+log = Log(
     {"log_level": DEBUG,
      "filename": "../ACCESS/LOGS/DEBUG/DEBUG.log",
      "truncate_message": False}
 )
```
🛑 Comments failed to post (3)
CODE/logicytics/__init__.py (1)
88-90: 🛠️ Refactor suggestion
Hey! Let's make this code a bit safer! 🛡️
Two things to think about:
- We're calling `Get.config_data()` twice (check line 17), which is like asking the same question twice! 🤔
- The `mkdir()` call could fail if something goes wrong, and we wouldn't know about it!
-FileManagement.mkdir() -DEBUG, *_ = Get.config_data() +try: + FileManagement.mkdir() +except Exception as e: + print(f"Oops! Couldn't create directory: {e}") + raise log = Log({"log_level": DEBUG}) # Use DEBUG from line 17📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
try:
    FileManagement.mkdir()
except Exception as e:
    print(f"Oops! Couldn't create directory: {e}")
    raise
log = Log({"log_level": DEBUG})  # Use DEBUG from line 17
```
CODE/network_psutil.py (1)
18-89: 🛠️ Refactor suggestion
This function is doing way too much! 🎭
The `get_network_info` function is like trying to juggle too many things at once! It's handling:
- Network stats
- Connections
- Interface info
- Bandwidth
- Hostname stuff
Let's split it into smaller, focused functions like:
collect_network_stats()
collect_connections()
measure_bandwidth()
get_host_info()
Want me to show you how to break this down into smaller, easier-to-manage pieces?
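As an illustration only (not the PR's code), the split could look roughly like this, with each helper owning a single concern:

```python
import psutil
import socket

def collect_network_stats() -> str:
    # I/O counters only
    io = psutil.net_io_counters()
    return f"sent={io.bytes_sent} recv={io.bytes_recv}"

def collect_connections() -> str:
    # Active inet connections only
    return "\n".join(str(c) for c in psutil.net_connections(kind="inet"))

def get_host_info() -> str:
    # Hostname / IP resolution only
    hostname = socket.gethostname()
    return f"{hostname} -> {socket.gethostbyname(hostname)}"

def get_network_info() -> None:
    # Thin orchestrator that just delegates to the focused helpers
    for name, collect in [("stats", collect_network_stats),
                          ("connections", collect_connections),
                          ("host", get_host_info)]:
        print(f"[{name}]\n{collect()}")
```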
CODE/_dev.py (1)
124-126:
⚠️ Potential issue
Hold up! We should check if the version input is valid! 🛡️
Right now, any input could be accepted as a version number. That could cause problems later!
Let's add some validation:
```diff
-    _update_ini_file("config.ini", input(f"Enter the new version of the project (Old version is {VERSION}): "),
-                     "version")
+    while True:
+        version = input(f"Enter the new version of the project (Old version is {VERSION}): ")
+        if re.match(r'^\d+\.\d+\.\d+$', version):
+            _update_ini_file("config.ini", version, "version")
+            break
+        else:
+            print("Please enter a valid version number (e.g., 1.2.3)")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
    _update_ini_file("config.ini", files, "files")
    while True:
        version = input(f"Enter the new version of the project (Old version is {VERSION}): ")
        if re.match(r'^\d+\.\d+\.\d+$', version):
            _update_ini_file("config.ini", version, "version")
            break
        else:
            print("Please enter a valid version number (e.g., 1.2.3)")
```
Code Climate has analyzed commit b1fd74d and detected 5 issues on this pull request. Here's the issue category breakdown:
View more on Code Climate.
Prerequisites
--dev flag, if required.

PR Type
update
Description
Reworked how Get.py gets the path of config.ini (for Flag.py): it now resolves the path with proper logic instead of depending on the execution script's path (see the sketch after this list).
Removed init files for IDEA.
Removed the deprecated feature _generate_data.py.
Minor updates to FileManagement.py and config.ini for readability.
Allowed _test_gpu_acceleration.py to be imported via its check_gpu() function; this is experimental and still planned for removal.
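For example, one common way to make such a lookup independent of the caller's working directory (a sketch, not necessarily Get.py's exact implementation) is to resolve relative to the module file:

```python
from pathlib import Path

def config_path() -> Path:
    # Resolve config.ini relative to this module, not the script that imported it
    path = Path(__file__).resolve().parent / "config.ini"
    if not path.exists():
        raise FileNotFoundError(f"config.ini not found at {path}")
    return path
```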
This pull request includes several changes to improve functionality, remove deprecated code, and update configurations in the CODE directory. The most important changes include the addition of a new network utility script, modifications to existing scripts for better type hinting and error handling, and the removal of deprecated functions.

New functionality:
CODE/network_psutil.py: Added a new script to collect and save various network statistics using psutil. This includes network interface stats, connections, addresses, and bandwidth usage.

Code improvements:
CODE/VulnScan/tools/_test_gpu_acceleration.py: Modified the check_gpu function to return strings instead of printing directly, improving testability and reusability (a sketch follows below).
CODE/logicytics/FileManagement.py: Updated the __remove_files and and_hash methods to use type hints and simplified the filename generation logic. [1] [2]
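A minimal sketch of the return-a-string pattern described for check_gpu, assuming torch as the backend (the real function body may differ):

```python
import torch

def check_gpu() -> str:
    """Report GPU availability as a string instead of printing it."""
    if torch.cuda.is_available():
        return f"GPU acceleration is available: {torch.cuda.get_device_name(0)}"
    return "GPU acceleration is not available; falling back to CPU."
```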
Removal of deprecated code:
CODE/VulnScan/v2-deprecated/_generate_data.py: Removed the entire file, which contained deprecated functions for generating test data.

Configuration updates:
CODE/config.ini: Updated the version to 3.4.0 and added a new file, network_psutil.py, to the list of tracked files.
CODE/dump_memory.py: Updated TODO comments to reflect the new version 3.4.1.

Minor changes:
CODE/logicytics/Flag.py: Renamed the Match class to _Match and updated references to improve encapsulation. [1] [2]
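As a generic illustration of the underscore convention (not Flag.py's actual contents), a leading underscore signals that the class is internal to the module:

```python
class _Match:
    """Internal helper; the leading underscore keeps it out of wildcard imports."""
    def __init__(self, flag: str):
        self.flag = flag

def match_flag(flag: str) -> "_Match":
    # Public entry point; callers never need to name _Match directly
    return _Match(flag)
```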
Credit
N/A
Issues Fixed
N/A
Summary by CodeRabbit
New Features
Chores
Refactor