[Bug] z_pub returns munmap_chunk(): invalid pointer #284
Comments
It looks like there is a memory leak somewhere, either due to the protocol refactor or the latest code reorganization, since they were the biggest changes in the recent past. I also suggest that throughput tests be run for at least 5 minutes (payloads from 8 bytes to at least 1024 bytes) while monitoring memory usage. This was a common practice of mine in the past.
I think it makes more sense to run it quickly through the macOS memory profilers. No need to run for 5 minutes; if there is a leak, it will show quite quickly.
Indeed, and Valgrind can also be used for that.
I've run the steps through Valgrind. It looks like the problem is with the example rather than Zenoh-Pico itself. When the user has overridden the value to be published (with the -v option), the example still attempts to free the value; in that case the value pointer points to memory within the argv array, which must not be freed.
This PR removes the invalid free. I've also checked the other examples for similar invalid frees and none were found.
Describe the bug
The z_pub example crashes with munmap_chunk(): invalid pointer when sending large data. The same crash happens when compiling Zenoh-Pico in both Debug and Release mode.
To reproduce
Start a Zenoh router:
Start a Zenoh-Pico subscriber:
Start a Zenoh-Pico publisher with large data:
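The steps above can be sketched roughly as follows. The binary paths and router invocation are assumptions that may differ between checkouts and build setups; the -v option is the one mentioned in the thread.

```shell
# Sketch of the reproduction; example binary paths are assumptions
# and may differ between builds of Zenoh / Zenoh-Pico.

# 1. Start a Zenoh router (from the main Zenoh distribution):
#      zenohd &

# 2. Start a Zenoh-Pico subscriber:
#      ./build/examples/z_sub &

# 3. Start a Zenoh-Pico publisher with a large payload (here ~64 KiB)
#    supplied via -v, which triggers the invalid free:
PAYLOAD="$(head -c 65536 /dev/zero | tr '\0' 'A')"
echo "payload length: ${#PAYLOAD}"
#      ./build/examples/z_pub -v "$PAYLOAD"
```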
After a few publications, the publisher terminates with:
munmap_chunk(): invalid pointer
Aborted (core dumped)
System info