
[BUG] Cannot generate reports #315

Open
rumaf opened this issue Dec 6, 2024 · 9 comments
Labels: bug (Something isn't working)

rumaf commented Dec 6, 2024

Describe the bug
Cannot generate reports in latest versions

Affected versions: 24.10.1 and 24.12.01
Working version: 22.4.51

Steps to reproduce the behavior:
Open the report page in the UI or try to generate a report with gvm-cli

Environment:

  • OS: Ubuntu 24.04
  • Memory available to OS: 4G
  • Container environment: docker

Logs

Warning shown on the report page:

Error while loading Report 15096f57-583d-43e4-92d5-690522bd70f3
Please try again.
Rejection: Unknown Error
Error
    at new _Rejection (https://REDACTED/assets/index-h_LaV50-.js:40:432310)
    at https://REDACTED/assets/index-h_LaV50-.js:40:436027

HTTP response error:
An internal error occurred while getting a report. The report could not be delivered. Diagnostics: Failure to receive response from manager daemon.

gvm-cli response error:

ERROR:gvmtools.cli:Remote closed the connection

gvmd.log

BACKTRACE: gvmd(+0x6e894) [0x620575244894]
BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(+0x3c050) [0x7b52e3f36050]
BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(g_utf8_validate+0xc) [0x7b52e424022c]
BACKTRACE: gvmd(strescape_check_utf8+0x1a) [0x6205752dd3da]
BACKTRACE: gvmd(get_certificate_info+0x4f2) [0x62057524cee2]
BACKTRACE: gvmd(+0x8a974) [0x620575260974]
BACKTRACE: gvmd(+0xccf02) [0x6205752a2f02]
BACKTRACE: gvmd(manage_send_report+0x25d) [0x6205752accbd]
BACKTRACE: gvmd(+0x125b26) [0x6205752fbb26]
BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x57fa2) [0x7b52e420dfa2]
BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(g_markup_parse_context_parse+0xc22) [0x7b52e420ecc2]
BACKTRACE: gvmd(process_gmp_client_input+0x4b) [0x6205753135db]
BACKTRACE: gvmd(serve_gmp+0x41c) [0x620575248e8c]
BACKTRACE: gvmd(+0x6eaa3) [0x620575244aa3]
BACKTRACE: gvmd(+0x6f151) [0x620575245151]
BACKTRACE: gvmd(gvmd+0x1b5f) [0x62057524809f]
BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(+0x2724a) [0x7b52e3f2124a]
BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7b52e3f21305]
BACKTRACE: gvmd(_start+0x21) [0x620575244711]
MESSAGE: Received Segmentation fault signal

This is only affecting some scans.
With gvm-cli, if I remove the filters it generates a report, although an incomplete one.
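For context, the backtrace shows the crash path manage_send_report → get_certificate_info → strescape_check_utf8 → g_utf8_validate, i.e. gvmd dies while checking that certificate data from the scan results is valid UTF-8. A hypothetical Python sketch (not gvmd code) of the kind of validation that is tripping; in C, running the same check on a NULL or unterminated buffer segfaults instead of raising an error:

```python
# Hypothetical illustration (not gvmd source): certificate fields pulled
# from scan results may carry raw DER/binary bytes rather than UTF-8 text.
def is_valid_utf8(data: bytes) -> bool:
    """Rough Python analogue of glib's g_utf8_validate()."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(is_valid_utf8(b"CN=example.com"))    # a well-formed subject string -> True
print(is_valid_utf8(b"\x30\x82\xff\xfe"))  # DER-like bytes: not UTF-8 -> False
```

This would also explain why only some scans are affected: only targets that return such certificate data trigger the code path.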

immauss (Owner) commented Dec 6, 2024

Can you share more detail on the gvm-cli command? I expect this is more of a Greenbone issue that I might not be able to help with, but I need more detail to understand where the problem is.

rumaf (Author) commented Dec 7, 2024

Sure,

gvm-cli tls --xml "<get_reports report_id='REPORT_ID_HERE' format_id='c402cc3e-b531-11e1-9163-406186ea4fc5' details='True' filter='apply_overrides=0 levels=hml rows=100 min_qod=70 first=1 sort-reverse=severity ignore_pagination=1'/>"
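For reference, the raw GMP `<get_reports>` request that this command sends can also be built programmatically; a sketch using Python's standard library (REPORT_ID_HERE stays a placeholder, as in the command itself):

```python
import xml.etree.ElementTree as ET

# Rebuild the GMP <get_reports> request from the gvm-cli command above.
req = ET.Element("get_reports", {
    "report_id": "REPORT_ID_HERE",  # placeholder
    "format_id": "c402cc3e-b531-11e1-9163-406186ea4fc5",
    "details": "True",
    "filter": ("apply_overrides=0 levels=hml rows=100 min_qod=70 "
               "first=1 sort-reverse=severity ignore_pagination=1"),
})
print(ET.tostring(req, encoding="unicode"))
```

Building the element this way guarantees well-formed XML and makes it easy to vary the filter string when narrowing down which filter term triggers the crash.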


Paiii3 commented Dec 28, 2024

I had the same issue. @rumaf, were you able to fix it?


Note:
I scanned many tasks (more than 20) but found just one task with this issue. I tested multiple scenarios to find a fix, but none has worked so far. Below are my tests:

  • rescan the existing task more than 3 times ----> issue still occurs
  • clone the existing task to a new one and rescan ----> issue still occurs
  • My target is 10.10.10.0/24, so I split it into 4 targets (/26):
    1. 10.10.10.0/26 ----> issue does not occur
    2. 10.10.10.64/26 ----> issue does not occur
    3. 10.10.10.128/26 ----> issue does not occur
    4. 10.10.10.192/26 ----> issue still occurs
  • update the docker images, create a new container, and try to export the report again ----> issue still occurs
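The subnet split used in this bisection is easy to reproduce with Python's stdlib ipaddress module, e.g. to split the offending /26 further into /27s and keep narrowing down the triggering host:

```python
import ipaddress

# Enumerate the four /26 subnets of the /24 target tested above.
target = ipaddress.ip_network("10.10.10.0/24")
subnets = list(target.subnets(new_prefix=26))
for net in subnets:
    print(net)
# -> 10.10.10.0/26, 10.10.10.64/26, 10.10.10.128/26, 10.10.10.192/26
```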

immauss (Owner) commented Jan 6, 2025

I'm at a complete loss on this one guys.

If you are still getting this with the latest image, which currently uses the latest versions of everything from Greenbone, then this looks like an error internal to gvmd, not a containerization issue.

Please try posting this over on Greenbone's forum to see if anyone there has an idea.

https://forum.greenbone.net/

Feel free to reference this thread.

Thanks,
-Scott

immauss added the bug (Something isn't working) label on Jan 6, 2025
rumaf (Author) commented Jan 6, 2025

My "fix" was to rollback to 22.4.51 for now.
I don't have enough bandwidth to allocate to this right now.
I did tests similar to @Paiii3, trying to isolate the problem, testing multiple scenarios, but the problem persists, weirdly only on some tasks.

immauss (Owner) commented Jan 6, 2025

Oh .... So it worked in 22.4.51 . . . . .
Sorry .... I did not catch that.
Let me take a look and see what could have changed. . . .
It might still be the same answer, but this will give us more info to pass to Greenbone.

-Scott

immauss (Owner) commented Jan 6, 2025

OK .... Looks like the only changes from 22.4.51 to 24.10.1 were some additions to the postfix config and a gvmd upgrade from 23.8.1 to 24.0.0.
So the Greenbone updates to gvmd are the most likely problem here.

-Scott


TotalGriffLock commented Jan 7, 2025

Don't know if this is helpful, as it does look like an upstream issue, but I updated from 24.10.1 to the latest today and the issue is still there. From the logs:

==> /usr/local/var/log/gvm/gvmd.log <==
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(+0x6e974) [0x5c04aa47a974]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(+0x3c050) [0x74ee67cf3050]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(g_utf8_validate+0xc) [0x74ee67ffd22c]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(strescape_check_utf8+0x1a) [0x5c04aa51344a]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(get_certificate_info+0x4f2) [0x5c04aa483412]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(+0x8b314) [0x5c04aa497314]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(+0xcb102) [0x5c04aa4d7102]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(manage_send_report+0x25d) [0x5c04aa4e0a1d]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(+0x126606) [0x5c04aa532606]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x57fa2) [0x74ee67fcafa2]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(g_markup_parse_context_parse+0xc22) [0x74ee67fcbcc2]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(process_gmp_client_input+0x4b) [0x5c04aa54a29b]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(serve_gmp+0x41c) [0x5c04aa47ef5c]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(+0x6eb83) [0x5c04aa47ab83]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(+0x6f231) [0x5c04aa47b231]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(gvmd+0x1b4f) [0x5c04aa47e16f]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(+0x2724a) [0x74ee67cde24a]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x74ee67cde305]
md   main:MESSAGE:2025-01-07 14h02.26 utc:2655: BACKTRACE: gvmd(_start+0x21) [0x5c04aa47a7f1]
md manage:MESSAGE:2025-01-07 14h02.26 utc:2655: Received Segmentation fault signal
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(+0x6e974) [0x5c04aa47a974]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(+0x3c050) [0x74ee67cf3050]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(g_utf8_validate+0xc) [0x74ee67ffd22c]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(strescape_check_utf8+0x1a) [0x5c04aa51344a]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(get_certificate_info+0x4f2) [0x5c04aa483412]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(+0x8b314) [0x5c04aa497314]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(+0xcb102) [0x5c04aa4d7102]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(manage_send_report+0x25d) [0x5c04aa4e0a1d]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(+0x126606) [0x5c04aa532606]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x57fa2) [0x74ee67fcafa2]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: /lib/x86_64-linux-gnu/libglib-2.0.so.0(g_markup_parse_context_parse+0xc22) [0x74ee67fcbcc2]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(process_gmp_client_input+0x4b) [0x5c04aa54a29b]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(serve_gmp+0x41c) [0x5c04aa47ef5c]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(+0x6eb83) [0x5c04aa47ab83]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(+0x6f231) [0x5c04aa47b231]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(gvmd+0x1b4f) [0x5c04aa47e16f]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(+0x2724a) [0x74ee67cde24a]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x74ee67cde305]
md   main:MESSAGE:2025-01-07 14h02.32 utc:2661: BACKTRACE: gvmd(_start+0x21) [0x5c04aa47a7f1]
md manage:MESSAGE:2025-01-07 14h02.32 utc:2661: Received Segmentation fault signal

There is a forum post regarding it here: https://forum.greenbone.net/t/error-while-loading-report/19868/5
I can also confirm 24.0.1 worked, but that image is no longer available on Docker Hub.

rumaf (Author) commented Jan 8, 2025

This issue happened again today on version 22.4.51, confirming that it also occurs on this version and that downgrading will not prevent it.
Upon examining my monitoring dashboard, I found that another container filled the disk around the same time my tasks were running.

Weirdly, the problematic task is always the same, and no other tasks are affected. I'm not sure if this is related to the low disk space, but I'm going to isolate the container in a VM with increased disk size to see if the issue persists.
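Since low disk space is a suspect, a small pre-flight check could be run on the host before scan tasks start; this is a hypothetical helper (stdlib only, arbitrary 5 GB default threshold), not part of the project:

```python
import shutil

def enough_disk(path: str = "/", min_free_gb: float = 5.0) -> bool:
    """Return True if at least min_free_gb of free space remains at path.

    For a containerized setup, point `path` at the filesystem backing
    the container's data volume.
    """
    return shutil.disk_usage(path).free >= min_free_gb * 1024**3

print(enough_disk("/", 1.0))
```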
