Commit

Merge branch 'dev' into doks-overhaul
Maffooch committed Nov 19, 2024
2 parents b85b6ff + 7b8b876 commit ee2e231
Showing 43 changed files with 1,406 additions and 154 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/release-drafter.yml
@@ -48,6 +48,8 @@ jobs:
steps:
- name: Load OAS files from artifacts
uses: actions/download-artifact@v4
with:
pattern: oas-*

- name: Upload Release Asset - OpenAPI Specification - YAML
id: upload-release-asset-yaml
2 changes: 1 addition & 1 deletion Dockerfile.integration-tests-debian
@@ -1,7 +1,7 @@

# code: language=Dockerfile

FROM openapitools/openapi-generator-cli:v7.9.0@sha256:bb32f5f0c9f5bdbb7b00959e8009de0230aedc200662701f05fc244c36f967ba AS openapitools
FROM openapitools/openapi-generator-cli:v7.10.0@sha256:f2054a5a7908ad81017d0f0839514ba5eab06ae628914ff71554d46fac1bcf7a AS openapitools
FROM python:3.11.9-slim-bookworm@sha256:8c1036ec919826052306dfb5286e4753ffd9d5f6c24fbc352a5399c3b405b57e AS build
WORKDIR /app
RUN \
2 changes: 1 addition & 1 deletion Dockerfile.nginx-alpine
@@ -140,7 +140,7 @@ COPY manage.py ./
COPY dojo/ ./dojo/
RUN env DD_SECRET_KEY='.' python3 manage.py collectstatic --noinput && true

FROM nginx:1.27.2-alpine@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
FROM nginx:1.27.2-alpine@sha256:74175cf34632e88c6cfe206897cbfe2d2fecf9bf033c40e7f9775a3689e8adc7
ARG uid=1001
ARG appuser=defectdojo
COPY --from=collectstatic /app/static/ /usr/share/nginx/html/static/
2 changes: 1 addition & 1 deletion Dockerfile.nginx-debian
@@ -73,7 +73,7 @@ COPY dojo/ ./dojo/

RUN env DD_SECRET_KEY='.' python3 manage.py collectstatic --noinput && true

FROM nginx:1.27.2-alpine@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
FROM nginx:1.27.2-alpine@sha256:74175cf34632e88c6cfe206897cbfe2d2fecf9bf033c40e7f9775a3689e8adc7
ARG uid=1001
ARG appuser=defectdojo
COPY --from=collectstatic /app/static/ /usr/share/nginx/html/static/
Empty file removed components/node_modules/.gitkeep
Empty file.
2 changes: 1 addition & 1 deletion docker-compose.yml
@@ -103,7 +103,7 @@ services:
source: ./docker/extra_settings
target: /app/docker/extra_settings
postgres:
image: postgres:17.0-alpine@sha256:d388be15cfb665c723da47cccdc7ea5c003ed71f700c5419bbd075033227ce1f
image: postgres:17.1-alpine@sha256:0d9624535618a135c5453258fd629f4963390338b11aaffb92292c12df3a6c17
environment:
POSTGRES_DB: ${DD_DATABASE_NAME:-defectdojo}
POSTGRES_USER: ${DD_DATABASE_USER:-defectdojo}
8 changes: 4 additions & 4 deletions docs/content/en/open_source/archived_docs/usage/features.md
@@ -244,7 +244,7 @@ The environment variable will override the settings in `settings.dist.py`, repla

The available algorithms are:

DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL
DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL (value for `DD_DEDUPLICATION_ALGORITHM_PER_PARSER`: `unique_id_from_tool`)
: The deduplication occurs based on
finding.unique_id_from_tool which is a unique technical
id existing in the source tool. Few scanners populate this
@@ -266,12 +266,12 @@ DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL
able to recognise that findings found in previous
scans are actually the same as the new findings.

DEDUPE_ALGO_HASH_CODE
DEDUPE_ALGO_HASH_CODE (value for `DD_DEDUPLICATION_ALGORITHM_PER_PARSER`: `hash_code`)
: The deduplication occurs based on finding.hash_code. The
hash_code itself is configurable for each scanner in
parameter `HASHCODE_FIELDS_PER_SCANNER`.

DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE
DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE (value for `DD_DEDUPLICATION_ALGORITHM_PER_PARSER`: `unique_id_from_tool_or_hash_code`)
: A finding is a duplicate of another if they have the same
unique_id_from_tool OR the same hash_code.

@@ -284,7 +284,7 @@ DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE
cross-parser deduplication


DEDUPE_ALGO_LEGACY
DEDUPE_ALGO_LEGACY (value for `DD_DEDUPLICATION_ALGORITHM_PER_PARSER`: `legacy`)
: This is the algorithm that was in place before the per-parser
configuration was made possible; it is also the default for
backward-compatibility reasons.
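
As a quick illustration of the mapping above, the override is supplied as JSON via the environment before DefectDojo starts; a minimal sketch in which the parser name "Acme Scanner" is a placeholder:

import json
import os

# Hypothetical per-parser override; the value must be one of the documented
# identifiers: legacy, unique_id_from_tool, hash_code, unique_id_from_tool_or_hash_code.
os.environ["DD_DEDUPLICATION_ALGORITHM_PER_PARSER"] = json.dumps({
    "Acme Scanner": "unique_id_from_tool_or_hash_code",
})
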
45 changes: 45 additions & 0 deletions dojo/api_v2/serializers.py
@@ -417,6 +417,51 @@ class Meta:
fields = "__all__"


class MetadataSerializer(serializers.Serializer):
name = serializers.CharField(max_length=120)
value = serializers.CharField(max_length=300)


class MetaMainSerializer(serializers.Serializer):
id = serializers.IntegerField(read_only=True)

product = serializers.PrimaryKeyRelatedField(
queryset=Product.objects.all(),
required=False,
default=None,
allow_null=True,
)
endpoint = serializers.PrimaryKeyRelatedField(
queryset=Endpoint.objects.all(),
required=False,
default=None,
allow_null=True,
)
finding = serializers.PrimaryKeyRelatedField(
queryset=Finding.objects.all(),
required=False,
default=None,
allow_null=True,
)
metadata = MetadataSerializer(many=True)

def validate(self, data):
product_id = data.get("product", None)
endpoint_id = data.get("endpoint", None)
finding_id = data.get("finding", None)
metadata = data.get("metadata")

for item in metadata:
# this will only verify that one and only one of product, endpoint, or finding is passed...
DojoMeta(product=product_id,
endpoint=endpoint_id,
finding=finding_id,
name=item.get("name"),
value=item.get("value")).clean()

return data


class ProductMetaSerializer(serializers.ModelSerializer):
class Meta:
model = DojoMeta
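
As a rough sketch of what the new MetaMainSerializer accepts, the payload nests the metadata items under exactly one related object (DojoMeta.clean() rejects anything else); the IDs and names below are placeholders:

# Hypothetical payload: set exactly one of product, endpoint, or finding.
payload = {
    "product": 1,
    "endpoint": None,
    "finding": None,
    "metadata": [
        {"name": "team", "value": "backend"},
        {"name": "business_criticality", "value": "high"},
    ],
}
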
78 changes: 78 additions & 0 deletions dojo/api_v2/views.py
@@ -1650,6 +1650,61 @@ class DojoMetaViewSet(
def get_queryset(self):
return get_authorized_dojo_meta(Permissions.Product_View)

@extend_schema(
methods=["post", "patch"],
request=serializers.MetaMainSerializer,
responses={status.HTTP_200_OK: serializers.MetaMainSerializer},
filters=False,
)
@action(
detail=False, methods=["post", "patch"], pagination_class=None,
)
def batch(self, request, pk=None):
serialized_data = serializers.MetaMainSerializer(data=request.data)
if serialized_data.is_valid(raise_exception=True):
if request.method == "POST":
self.process_post(request.data)
if request.method == "PATCH":
self.process_patch(request.data)

return Response(status=status.HTTP_201_CREATED, data=serialized_data.data)

def process_post(self: object, data: dict):
product = Product.objects.filter(id=data.get("product")).first()
finding = Finding.objects.filter(id=data.get("finding")).first()
endpoint = Endpoint.objects.filter(id=data.get("endpoint")).first()
metalist = data.get("metadata")
for metadata in metalist:
try:
DojoMeta.objects.create(
product=product,
finding=finding,
endpoint=endpoint,
name=metadata.get("name"),
value=metadata.get("value"),
)
except (IntegrityError) as ex: # this should not happen as the data was validated in the batch call
raise ValidationError(str(ex))

def process_patch(self: object, data: dict):
product = Product.objects.filter(id=data.get("product")).first()
finding = Finding.objects.filter(id=data.get("finding")).first()
endpoint = Endpoint.objects.filter(id=data.get("endpoint")).first()
metalist = data.get("metadata")
for metadata in metalist:
dojometa = DojoMeta.objects.filter(product=product, finding=finding, endpoint=endpoint, name=metadata.get("name"))
if dojometa:
try:
dojometa.update(
name=metadata.get("name"),
value=metadata.get("value"),
)
except (IntegrityError) as ex:
raise ValidationError(str(ex))
else:
msg = f"Metadata {metadata.get('name')} not found for object."
raise ValidationError(msg)
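
A sketch of how the new batch action might be invoked, assuming DojoMetaViewSet stays routed at /api/v2/metadata/ (so the action lands at /api/v2/metadata/batch/); the host, token, and IDs are placeholders:

import requests

# Hypothetical POST creating several metadata entries on product 1;
# a PATCH to the same URL updates entries that already exist.
response = requests.post(
    "https://defectdojo.example.com/api/v2/metadata/batch/",
    headers={"Authorization": "Token <api-token>"},
    json={
        "product": 1,
        "metadata": [
            {"name": "team", "value": "backend"},
            {"name": "business_criticality", "value": "high"},
        ],
    },
    timeout=30,
)
response.raise_for_status()
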


@extend_schema_view(**schema_with_prefetch())
class ProductViewSet(
Expand Down Expand Up @@ -3087,6 +3142,29 @@ class QuestionnaireEngagementSurveyViewSet(
def get_queryset(self):
return Engagement_Survey.objects.all().order_by("id")

@extend_schema(
request=OpenApiTypes.NONE,
parameters=[
OpenApiParameter(
"engagement_id", OpenApiTypes.INT, OpenApiParameter.PATH,
),
],
responses={status.HTTP_200_OK: serializers.QuestionnaireAnsweredSurveySerializer},
)
@action(
detail=True, methods=["post"], url_path=r"link_engagement/(?P<engagement_id>\d+)",
)
def link_engagement(self, request, pk, engagement_id):
# Get the answered survey
engagement_survey = self.get_object()
# Safely get the engagement
engagement = get_object_or_404(Engagement.objects, pk=engagement_id)
# Link the engagement
answered_survey, _ = Answered_Survey.objects.get_or_create(engagement=engagement, survey=engagement_survey)
# Send a favorable response
serialized_answered_survey = serializers.QuestionnaireAnsweredSurveySerializer(answered_survey)
return Response(serialized_answered_survey.data)
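
A hedged sketch of calling the new link_engagement action; the /api/v2/questionnaires/ prefix is an assumption about the router, and the survey and engagement IDs are placeholders:

import requests

# Hypothetical call: link questionnaire (Engagement_Survey) 7 to engagement 42.
response = requests.post(
    "https://defectdojo.example.com/api/v2/questionnaires/7/link_engagement/42/",
    headers={"Authorization": "Token <api-token>"},
    timeout=30,
)
print(response.json())  # the serialized Answered_Survey created (or fetched) for the pair
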


@extend_schema_view(**schema_with_prefetch())
class QuestionnaireAnsweredSurveyViewSet(
7 changes: 5 additions & 2 deletions dojo/forms.py
Expand Up @@ -4,6 +4,7 @@
import re
import warnings
from datetime import date, datetime
from pathlib import Path

import tagulous
from crispy_forms.bootstrap import InlineCheckboxes, InlineRadios
@@ -754,7 +755,8 @@ class UploadThreatForm(forms.Form):

def clean(self):
if (file := self.cleaned_data.get("file", None)) is not None:
ext = os.path.splitext(file.name)[1] # [0] returns path+filename
path = Path(file.name)
ext = path.suffix
valid_extensions = [".jpg", ".png", ".pdf"]
if ext.lower() not in valid_extensions:
if accepted_extensions := f"{', '.join(valid_extensions)}":
@@ -872,7 +874,8 @@ def clean(self):
for form in self.forms:
file = form.cleaned_data.get("file", None)
if file:
ext = os.path.splitext(file.name)[1] # [0] returns path+filename
path = Path(file.name)
ext = path.suffix
valid_extensions = settings.FILE_UPLOAD_TYPES
if ext.lower() not in valid_extensions:
if accepted_extensions := f"{', '.join(valid_extensions)}":
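
For context, Path.suffix keeps the leading dot and preserves the original case, which is why the check lowercases it before comparing; a tiny sketch under those assumptions with a made-up filename:

from pathlib import Path

# Path.suffix includes the dot, so it compares directly against the allow-list.
valid_extensions = [".jpg", ".png", ".pdf"]
ext = Path("threat-model.PDF").suffix  # ".PDF"
assert ext.lower() in valid_extensions
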
4 changes: 3 additions & 1 deletion dojo/models.py
@@ -141,7 +141,9 @@ def __init__(self, directory=None, keep_basename=False, keep_ext=True):
self.keep_ext = keep_ext

def __call__(self, model_instance, filename):
base, ext = os.path.splitext(filename)
path = Path(filename)
base = path.parent / path.stem
ext = path.suffix
filename = f"{base}_{uuid4()}" if self.keep_basename else str(uuid4())
if self.keep_ext:
filename += ext
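
A small sketch of what the pathlib version of __call__ produces for a hypothetical upload, assuming keep_basename and keep_ext are both enabled:

from pathlib import Path
from uuid import uuid4

# Mirrors the logic above for a made-up filename "reports/scan.json".
path = Path("reports/scan.json")
base = path.parent / path.stem      # reports/scan
ext = path.suffix                   # ".json"
filename = f"{base}_{uuid4()}"      # keep_basename=True keeps "reports/scan_"
filename += ext                     # keep_ext=True re-appends ".json"
print(filename)                     # e.g. "reports/scan_<uuid>.json"
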
2 changes: 1 addition & 1 deletion dojo/settings/.settings.dist.py.sha256sum
@@ -1 +1 @@
fc660db6c2f55181fd8515d9b13c75197d8272c5c635235f6f60e4b1fc77af04
01215b397651163c0403b028adb08b18fa83c4abb188b0536dfb9e43eddcd9cd
21 changes: 21 additions & 0 deletions dojo/settings/settings.dist.py
@@ -1301,6 +1301,12 @@ def saml2_attrib_map_format(dict):
if len(env("DD_HASHCODE_FIELDS_PER_SCANNER")) > 0:
env_hashcode_fields_per_scanner = json.loads(env("DD_HASHCODE_FIELDS_PER_SCANNER"))
for key, value in env_hashcode_fields_per_scanner.items():
if not isinstance(value, list):
msg = f"Fields definition '{value}' for hashcode calculation of '{key}' is not valid. It needs to be list of strings but it is {type(value)}."
raise TypeError(msg)
if not all(isinstance(field, str) for field in value):
msg = f"Fields for hashcode calculation for {key} are not valid. It needs to be list of strings. Some of fields are not string."
raise AttributeError(msg)
if key in HASHCODE_FIELDS_PER_SCANNER:
logger.info(f"Replacing {key} with value {value} (previously set to {HASHCODE_FIELDS_PER_SCANNER[key]}) from env var DD_HASHCODE_FIELDS_PER_SCANNER")
HASHCODE_FIELDS_PER_SCANNER[key] = value
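
For reference, the new checks expect every value to be a list of field-name strings; a hedged example of a well-formed override, where "Acme Scanner" and the field names are placeholders:

import json
import os

# Hypothetical override; a non-list value raises TypeError and a list containing
# non-strings raises AttributeError at startup, per the validation above.
os.environ["DD_HASHCODE_FIELDS_PER_SCANNER"] = json.dumps({
    "Acme Scanner": ["title", "cwe", "file_path"],
})
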
@@ -1382,6 +1388,13 @@ def saml2_attrib_map_format(dict):
# Makes it possible to deduplicate on a technical id (same parser) and also on some functional fields (cross-parsers deduplication)
DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE = "unique_id_from_tool_or_hash_code"

DEDUPE_ALGOS = [
DEDUPE_ALGO_LEGACY,
DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL,
DEDUPE_ALGO_HASH_CODE,
DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE,
]

# Allows to deduplicate with endpoints if endpoints is not included in the hashcode.
# Possible values are: scheme, host, port, path, query, fragment, userinfo, and user. For a details description see https://hyperlink.readthedocs.io/en/latest/api.html#attributes.
# Example:
@@ -1532,6 +1545,9 @@ def saml2_attrib_map_format(dict):
if len(env("DD_DEDUPLICATION_ALGORITHM_PER_PARSER")) > 0:
env_dedup_algorithm_per_parser = json.loads(env("DD_DEDUPLICATION_ALGORITHM_PER_PARSER"))
for key, value in env_dedup_algorithm_per_parser.items():
if value not in DEDUPE_ALGOS:
msg = f"DEDUP algorithm '{value}' for '{key}' is not valid. Use one of following values: {', '.join(DEDUPE_ALGOS)}"
raise AttributeError(msg)
if key in DEDUPLICATION_ALGORITHM_PER_PARSER:
logger.info(f"Replacing {key} with value {value} (previously set to {DEDUPLICATION_ALGORITHM_PER_PARSER[key]}) from env var DD_DEDUPLICATION_ALGORITHM_PER_PARSER")
DEDUPLICATION_ALGORITHM_PER_PARSER[key] = value
@@ -1750,9 +1766,14 @@ def saml2_attrib_map_format(dict):
"ELSA": "https://linux.oracle.com/errata/&&.html", # e.g. https://linux.oracle.com/errata/ELSA-2024-12714.html
"ELBA": "https://linux.oracle.com/errata/&&.html", # e.g. https://linux.oracle.com/errata/ELBA-2024-7457.html
"RXSA": "https://errata.rockylinux.org/", # e.g. https://errata.rockylinux.org/RXSA-2024:4928
"C-": "https://hub.armosec.io/docs/", # e.g. https://hub.armosec.io/docs/c-0085
"AVD": "https://avd.aquasec.com/misconfig/", # e.g. https://avd.aquasec.com/misconfig/avd-ksv-01010
"KHV": "https://avd.aquasec.com/misconfig/kubernetes/", # e.g. https://avd.aquasec.com/misconfig/kubernetes/khv045
"CAPEC": "https://capec.mitre.org/data/definitions/&&.html", # e.g. https://capec.mitre.org/data/definitions/157.html
"CWE": "https://cwe.mitre.org/data/definitions/&&.html", # e.g. https://cwe.mitre.org/data/definitions/79.html
"TEMP": "https://security-tracker.debian.org/tracker/", # e.g. https://security-tracker.debian.org/tracker/TEMP-0841856-B18BAF
"DSA": "https://security-tracker.debian.org/tracker/", # e.g. https://security-tracker.debian.org/tracker/DSA-5791-1
"RLSA": "https://errata.rockylinux.org/", # e.g. https://errata.rockylinux.org/RLSA-2024:7001
}
# List of acceptable file types that can be uploaded to a given object via arbitrary file upload
FILE_UPLOAD_TYPES = env("DD_FILE_UPLOAD_TYPES")
2 changes: 2 additions & 0 deletions dojo/templatetags/display_tags.py
@@ -780,6 +780,8 @@ def vulnerability_url(vulnerability_id):

for key in settings.VULNERABILITY_URLS:
if vulnerability_id.upper().startswith(key):
if key in ["AVD", "KHV", "C-"]:
return settings.VULNERABILITY_URLS[key] + str(vulnerability_id.lower())
if "&&" in settings.VULNERABILITY_URLS[key]:
# Process specific keys specially if needed
if key in ["CAPEC", "CWE"]:
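
To show what the new branch does, here is a standalone sketch that mirrors it (the real implementation lives in display_tags.py and reads settings.VULNERABILITY_URLS); the IDs follow the examples given in settings.dist.py above:

# Standalone approximation of the new AVD/KHV/C- branch.
VULNERABILITY_URLS = {
    "C-": "https://hub.armosec.io/docs/",
    "AVD": "https://avd.aquasec.com/misconfig/",
    "KHV": "https://avd.aquasec.com/misconfig/kubernetes/",
}

def sketch_vulnerability_url(vulnerability_id: str) -> str:
    for key, base_url in VULNERABILITY_URLS.items():
        if vulnerability_id.upper().startswith(key):
            # These providers expect the whole identifier, lower-cased, appended to the base URL.
            return base_url + vulnerability_id.lower()
    return vulnerability_id

assert sketch_vulnerability_url("AVD-KSV-01010") == "https://avd.aquasec.com/misconfig/avd-ksv-01010"
assert sketch_vulnerability_url("KHV045") == "https://avd.aquasec.com/misconfig/kubernetes/khv045"
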
3 changes: 2 additions & 1 deletion dojo/tools/aws_prowler_v3plus/prowler_v4.py
@@ -37,7 +37,8 @@ def process_ocsf_json(self, file, test):
documentation = deserialized.get("remediation", {}).get("references", "")
documentation = str(documentation) + "\n" + str(deserialized.get("unmapped", {}).get("related_url", ""))
security_domain = deserialized.get("resources", [{}])[0].get("type", "")
timestamp = deserialized.get("event_time")
# Prowler v4.5.0 replaced the 'event_time' key in the report with 'time_dt'
timestamp = deserialized.get("time_dt") or deserialized.get("event_time")
resource_arn = deserialized.get("resources", [{}])[0].get("uid", "")
resource_id = deserialized.get("resources", [{}])[0].get("name", "")
unique_id_from_tool = deserialized.get("finding_info", {}).get("uid", "")
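
The new line simply prefers the newer key and falls back to the old one; a quick sketch with made-up minimal report fragments:

# Prowler v4.5.0+ reports carry "time_dt"; older reports carry "event_time".
new_style = {"time_dt": "2024-11-19T10:00:00Z"}
old_style = {"event_time": "2024-11-19T10:00:00Z"}

assert (new_style.get("time_dt") or new_style.get("event_time")) == "2024-11-19T10:00:00Z"
assert (old_style.get("time_dt") or old_style.get("event_time")) == "2024-11-19T10:00:00Z"
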
2 changes: 1 addition & 1 deletion dojo/tools/bearer_cli/parser.py
@@ -33,7 +33,7 @@ def get_findings(self, file, test):
finding = Finding(
title=bearerfinding["title"] + " in " + bearerfinding["filename"] + ":" + str(bearerfinding["line_number"]),
test=test,
description=bearerfinding["description"] + "\n Detected code snippet: \n" + bearerfinding["snippet"],
description=bearerfinding["description"] + "\n Detected code snippet: \n" + bearerfinding.get("snippet", bearerfinding.get("code_extract")),
severity=severity,
cwe=bearerfinding["cwe_ids"][0],
static_finding=True,