
Feature: Azure support & Modularity (#14)
* Feature: Support for azure

* Feature: Azure support & Modularity

---------

Authored-by: Vishnu <vishnu>
vishnuvnn authored Sep 24, 2023
1 parent ba26abf commit 2edab3b
Showing 7 changed files with 261 additions and 113 deletions.
4 changes: 2 additions & 2 deletions Dockerfile
@@ -1,5 +1,5 @@
# Use an official Python runtime as a parent image
FROM python:3.8-slim
FROM python:3.11-alpine

# Set the working directory to /app
WORKDIR /app
@@ -11,7 +11,7 @@ COPY . /app
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Install Nmap
RUN apt-get update && apt-get install -y nmap
RUN apk update && apk add nmap nmap-scripts

# Make the Python script executable
RUN chmod +x exporter.py
139 changes: 77 additions & 62 deletions README.md
@@ -6,13 +6,13 @@

**Description**:

This Docker application sets up the Nmap Prometheus Exporter, a versatile Python utility designed to scan and monitor network hosts and services using Nmap. It exposes the scan results and statistics in Prometheus-compatible format. This exporter helps network administrators and DevOps teams gain insights into their network infrastructure, making it easier to detect changes, assess security, and maintain network health.
This Docker application sets up the Nmap Prometheus Exporter, a versatile Python utility designed to scan and monitor network hosts and services using Nmap. It exposes the scan results and statistics in a Prometheus-compatible format. This exporter helps network administrators and DevOps teams gain insights into their network infrastructure, making it easier to detect changes, assess security, and maintain network health.

**Key Features**:

- **Dockerized**: Easily deploy the Nmap Prometheus Exporter as a Docker container.
- **Cross-Platform**: Platform-independent codebase ensuring compatibility with various operating systems.
- **Automated Scanning**: Regularly scans a list of target IP addresses defined in the `portscanip.nmap` file.
- **Automated Scanning**: Regularly scans a list of target IP addresses by dynamically fetching from Azure or a file.
- **Prometheus Integration**: Exposes scan results and statistics as Prometheus metrics for easy monitoring and alerting.
- **Customizable**: Easily configure the scan frequency, target file, and Prometheus port.
- **Efficient**: Uses the Nmap library for efficient and comprehensive network scanning.
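
For context, the exposed metrics look roughly like the sample below (metric and label names are taken from `exporter.py` later in this commit; the host, product, and values are illustrative only):

```
nmap_scan_results{host="192.168.1.10",protocol="tcp",name="ssh",product_detected="OpenSSH 8.9"} 22.0
nmap_scan_stats_info{time_elapsed="12.5",uphosts="1",downhosts="0",totalhosts="1"} 1.0
```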
@@ -25,86 +25,105 @@ Before running the Docker application, ensure you have the following prerequisit
- [Docker](https://docs.docker.com/get-docker/)
- [Docker Compose](https://docs.docker.com/compose/install/)


## Usage

1. **Clone this repository** to your local machine:

```bash
git clone https://github.com/your-username/nmap-prometheus-exporter.git
```
`git clone https://github.com/your-username/nmap-prometheus-exporter.git`

2. **Navigate to the project directory**:

```bash
cd nmap-prometheus-exporter
```

3. Create a `portscanip.nmap` file in the project directory with a list of target IP addresses to scan.

4. Customize the scanning parameters and frequency by modifying the `docker-compose.yml` file:

- `SCAN_FILE`: Path to the `portscanip.nmap` file inside the container.
- `EXPORTER_PORT`: Port to expose Prometheus metrics.
- `SCAN_FREQUENCY`: Frequency of Nmap scans in seconds.
5. **Build the Docker image**:
`cd nmap-prometheus-exporter`

3. Create a `.env` file in the project directory with your environment variables. See the example in the `.env` section below.


4. **Build the Docker image**:

```bash
docker-compose build
```
`docker-compose build`

6. **Start the Docker container**:
5. **Start the Docker container**:

`docker-compose up -d`

```bash
docker-compose up -d
```
6. **Access Prometheus metrics** at `http://localhost:9808/metrics` (assuming you are running this on your local machine). Adjust the URL as needed based on your environment.

7. **Access Prometheus metrics** at `http://localhost:9808/metrics` (assuming you are running this on your local machine). Adjust the URL as needed based on your environment.
7. To stop and remove the container, use the following command:

8. To stop and remove the container, use the following command:

```bash
docker-compose down
```
`docker-compose down`


### Environment Variables (`.env` file)

Create a `.env` file in the project directory with the following variables:

If the list of IPs needs to be fetched from `azure`:
Replace the placeholders (`your_azure_client_id`, `your_azure_client_secret`, `your_azure_tenant_id`, and `your_azure_subscription_id`) with your actual Azure credentials.

```bash
TARGET_SOURCE=azure
AZURE_CLIENT_ID=your_azure_client_id
AZURE_CLIENT_SECRET=your_azure_client_secret
AZURE_TENANT_ID=your_azure_tenant_id
AZURE_SUBSCRIPTION_ID=your_azure_subscription_id
SCAN_FREQUENCY=36000
EXPORTER_PORT=9808
```

If the list of IPs needs to be fetched from `file`:
Uncomment the volume mount section in `docker-compose.yml` and replace `/path/to/your/portscanip.nmap` with the path to your file.

Here `portscanip.nmap` is a newline-separated list of IP addresses.


```bash
TARGET_SOURCE=file
TARGET_FILE=/app/portscanip.nmap
SCAN_FREQUENCY=36000
EXPORTER_PORT=9808
```
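
As an illustration, a minimal `portscanip.nmap` could contain (the addresses below are examples only):

```
192.168.1.10
10.0.0.5
scanme.nmap.org
```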


## Adding Prometheus Target and Alert Rules

To monitor your `nmap-prometheus-exporter` instance effectively, you can configure Prometheus to scrape metrics from it and set up alert rules for potential issues. Here's how you can do it:
### Prometheus Target Configuration
1. Edit your Prometheus configuration file, typically named `prometheus.yml`.
2. Add a new job configuration under `scrape_configs` to specify the target to scrape metrics from your `nmap-prometheus-exporter` instance. Replace `<exporter-host>` with the hostname or IP address where your exporter is running and `<port>` with the configured port (default: 9808).
```yaml
- job_name: nmap
scrape_interval: 60s
scrape_timeout: 30s
metrics_path: "/metrics"
static_configs:
- targets: ['<exporter-host>:<port>']
labels:
cloud: CLOUD_NAME # Replace "CLOUD_NAME" with your cloud provider (aws, azure, gcp, or any other)
```
3. Save the `prometheus.yml` file.
4. Restart Prometheus to apply the changes.
### Alert Rules Configuration
To set up alert rules for your `nmap-prometheus-exporter`, follow these steps:
1. Edit your Prometheus alerting rules file, typically named `alert.rules.yml`.
2. Add your alerting rules to the file. Here's an example rule that alerts when the `nmap-exporter` service is down:


```yaml
groups:
- alert: awsNmapExporterDown
expr: up{job="nmap"} == 0
for: 1m
@@ -125,18 +144,16 @@ groups:
annotations:
summary: "Port 22 is open to the world on an instance in CLOUD_NAME with IP address {{ $labels.host }}"
description: "Port 22 is open to the world on an instance in CLOUD_NAME with IP address {{ $labels.host }}"
```

3. Save the `alert.rules.yml` file.

4. Reload Prometheus to apply the new alert rules.


With these configurations in place, Prometheus will scrape metrics from your `nmap-prometheus-exporter`, and alerting rules will trigger alerts based on defined conditions. Customize the alerting rules to fit your monitoring needs.

Remember to adapt the configuration to your specific environment and requirements.

### Generate grafana dashboard
### Generate Grafana Dashboard

To visualize the metrics collected by `nmap-prometheus-exporter` in Grafana, follow these steps:

@@ -148,16 +165,18 @@ To visualize the metrics collected by `nmap-prometheus-exporter` in Grafana, fol

4. In the Configuration menu, click on "Data Sources."

5. You should now see a list of datasources configured in your Grafana instance.
5. You should now see a list of data sources configured in your Grafana instance.

6. Replace "YOUR_DS_NAME" with the Prometheus data source name where `nmap-prometheus-exporter`'s metrics are present in the following command:

`DATASOURCE="YOUR_DS_NAME" ; sed "s/PROMETHEUS_DS_PLACEHOLDER/$DATASOURCE/g" dashboard_template.json`

Run the command from the repository's root directory to generate the Grafana dashboard for the metrics.

7. You can import this json directly to grafana as a dashboard

7. You can import this JSON directly to Grafana as a dashboard.
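
To see what the substitution in step 6 actually does, here is a self-contained demonstration against a stand-in template (the temporary file below is illustrative, not the repository's actual `dashboard_template.json`):

```bash
# Create a tiny stand-in template containing the placeholder
printf '{"datasource": "PROMETHEUS_DS_PLACEHOLDER"}\n' > /tmp/dashboard_template.json

# Substitute the placeholder with a concrete datasource name
DATASOURCE="Prometheus"
sed "s/PROMETHEUS_DS_PLACEHOLDER/$DATASOURCE/g" /tmp/dashboard_template.json
# prints: {"datasource": "Prometheus"}
```

The same pattern works on the real template; only the file path and datasource name change.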

### Importing Grafana Dashboard

@@ -184,13 +203,11 @@

Now you have successfully imported the Grafana dashboard that visualizes the metrics collected by the `nmap-prometheus-exporter` into your Grafana instance.

Remember to adjust any panel configurations or queries if necessary to align the dashboard with your specific monitoring requirements.
Remember to adapt the configuration to your specific environment and requirements.

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Acknowledgments

@@ -200,6 +217,4 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
- [Docker Compose](https://docs.docker.com/compose/) - The tool for defining and running multi-container Docker applications.
- [Grafana](https://grafana.com/) - The visualization and monitoring platform.

**Logo Credit:** The logo design used in this project was crafted with the assistance of [LogoMakr.com/app](https://logomakr.com/app). We appreciate the creative support from LogoMakr in shaping our visual identity.
11 changes: 4 additions & 7 deletions docker-compose.yml
@@ -5,15 +5,12 @@ services:
context: .
dockerfile: Dockerfile
container_name: nmap-prometheus-exporter
volumes:
# Mount the host file to the container -> change host path to your need
- /path/to/your/portscanip.nmap:/app/portscanip.nmap
# # Mount the host file to the container -> change host path to your need
# volumes:
# - /path/to/your/portscanip.nmap:/app/portscanip.nmap
ports:
- "9808:9808"
environment:
- SCAN_FILE=/app/portscanip.nmap
- EXPORTER_PORT=9808
- SCAN_FREQUENCY=3600
env_file: .env # Specify the path to your .env file here
restart: always
networks:
- exporter_network
89 changes: 48 additions & 41 deletions exporter.py
@@ -1,45 +1,18 @@
#!/usr/bin/env python3

from __future__ import absolute_import
import prometheus_client
import time
import sys
import os
import nmap
import logging
from modules.ip_fetcher import fetch_azure_ips, fetch_ips_from_file
from modules.prometheus_format import expose_nmap_scan_results, expose_nmap_scan_stats, start_prometheus_server

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Create Prometheus metrics without clearing them
metric_results = prometheus_client.Gauge("nmap_scan_results",
"Holds the scanned result",
["host",
"protocol",
"name",
"product_detected"])
metric_info = prometheus_client.Info("nmap_scan_stats",
"Holds details about the scan")

# Exposes results of the scan in Prometheus format
def nmap_scan_results(nm):
list_scanned_items = []

for line in str(nm.csv()).splitlines():
list_scanned_items.append(line)

for line in list_scanned_items[1:]:
host, _, _, prot, port, name, _, prod, *_ = line.split(";")
metric_results.labels(host, prot, name, prod).set(float(port))

# Exposes stats of the scan in Prometheus format
def nmap_scan_stats(nm):
scanstats = nm.scanstats()
metric_info.info({"time_elapsed": scanstats["elapsed"],
"uphosts": scanstats["uphosts"],
"downhosts": scanstats["downhosts"],
"totalhosts": scanstats["totalhosts"]})

# Main function
def main():
@@ -53,20 +26,52 @@ def main():
nm = nmap.PortScanner()

while True:
file_name = os.getenv('SCAN_FILE', '/app/portscanip.nmap')
try:
with open(file_name, 'r') as f:
targets = f.read().replace("\n", " ").strip()
logger.info("Loaded scan targets from %s", file_name)
except OSError as e:
logger.error("Could not open/read file %s: %s", file_name, str(e))
# Fetch targets based on the selected source
target_source = os.getenv('TARGET_SOURCE', 'file')

if target_source == "file":
required_env_vars = ['TARGET_FILE']
missing_vars = [var for var in required_env_vars if os.getenv(var) is None]

if missing_vars:
# Handle the case where required environment variables are missing
error_message = f"The following environment variables are missing: {', '.join(missing_vars)}"
raise EnvironmentError(error_message)
else:
target_file = os.getenv('TARGET_FILE', '/app/portscanip.nmap')
targets = fetch_ips_from_file(target_file)

elif target_source == "azure":
required_env_vars = ['AZURE_CLIENT_ID', 'AZURE_CLIENT_SECRET', 'AZURE_TENANT_ID', 'AZURE_SUBSCRIPTION_ID']
missing_vars = [var for var in required_env_vars if os.getenv(var) is None]

if missing_vars:
# Handle the case where some Azure environment variables are missing
error_message = f"The following Azure environment variables are missing: {', '.join(missing_vars)}"
raise EnvironmentError(error_message)
else:
client_id = os.getenv('AZURE_CLIENT_ID')
client_secret = os.getenv('AZURE_CLIENT_SECRET')
tenant_id = os.getenv('AZURE_TENANT_ID')
subscription_id = os.getenv('AZURE_SUBSCRIPTION_ID')

# Use the function from the modules directory
azure_targets = fetch_azure_ips(client_id, client_secret, tenant_id, subscription_id)
# Join the fetched IPs into a space-separated string
targets = " ".join(azure_targets)

else:
# Handle the case when the target source is neither "file" nor "azure"
logger.error("Invalid target source specified: %s", target_source)
# Exit with an error code
sys.exit(1)

logger.info("Scanning targets: %s", targets)
try:
nm.scan(targets)
nmap_scan_results(nm)
nmap_scan_stats(nm)
expose_nmap_scan_results(nm)
expose_nmap_scan_stats(nm)
logger.info("Scan completed successfully")
except nmap.nmap.PortScannerError as e:
logger.error("Nmap scan failed: %s", str(e))
@@ -77,11 +82,13 @@ def main():

except KeyboardInterrupt:
logger.info("Received KeyboardInterrupt. Exiting.")
sys.exit(0) # Exit gracefully
# Exit gracefully
sys.exit(0)
except Exception as e:
logger.error("An unexpected error occurred: %s", str(e))

if __name__ == '__main__':
prometheus_client.start_http_server(int(os.getenv('EXPORTER_PORT', '9808')))
# Pass the desired port as an argument when calling the function
EXPORTER_PORT = int(os.getenv('EXPORTER_PORT', '9808'))
start_prometheus_server(EXPORTER_PORT)
main()
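
The `modules/ip_fetcher.py` and `modules/prometheus_format.py` files referenced by the new imports did not load on this page. As a rough sketch only (an assumption, not the repository's actual module), a `fetch_ips_from_file` consistent with how `main()` consumes its return value might look like:

```python
# Hypothetical sketch of fetch_ips_from_file from modules/ip_fetcher.py;
# the real implementation is not shown in this diff.

def fetch_ips_from_file(target_file):
    """Read a newline-separated target list and return it as the
    single space-separated string that nm.scan() expects."""
    with open(target_file, "r") as f:
        # "192.168.1.10\n10.0.0.5\n" -> "192.168.1.10 10.0.0.5"
        return f.read().replace("\n", " ").strip()
```

This mirrors the file-reading logic the commit removed from `exporter.py` itself, which is the most likely shape of the extracted helper.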

