Hi all,

is there really no test case that checks whether a kernel warning or bug occurred during the runtime of all test suites, or of a specific one?

I already asked this question in [1] a few months ago but got no answer.

From time to time we can reproduce warnings or even bugs during the runtime of our own test suite. It took some time to realize that we had real problems, because the LAVA jobs were marked "green", i.e. successful.

Yes, of course, I could implement such a check on my own (as an inline test suite or even a complete test definition repo), but isn't that something that should be provided here as well?

Did I overlook something?

[1] https://git.lavasoftware.org/lava/lava/-/issues/576
I wouldn't call it a "suite", but the KernelCI project runs a simple dmesg check script that, with some minor changes, will run in a generic LAVA environment. It could be a useful starting point for you. Maybe we could get something similar added to this repo?
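For reference, here is a minimal sketch in the spirit of such a dmesg check, not KernelCI's actual script; the marker patterns are an assumption and should be tuned per project:

```python
#!/usr/bin/env python3
"""Minimal dmesg sanity check (illustrative sketch only).

Exits non-zero when the kernel log contains a warning/oops/bug marker,
so a wrapping harness can mark the run as failed.
"""
import re
import subprocess
import sys

# Markers that commonly indicate serious kernel events; tune per project.
BAD_PATTERNS = re.compile(
    r"WARNING:|BUG:|Oops|Call Trace:|kernel panic|"
    r"general protection fault|Unable to handle kernel"
)

def main() -> int:
    # Reading the kernel log may require root on locked-down systems.
    log = subprocess.run(["dmesg"], capture_output=True, text=True,
                         check=True).stdout
    hits = [line for line in log.splitlines() if BAD_PATTERNS.search(line)]
    for line in hits:
        print(line)
    print(f"dmesg check: {len(hits)} suspicious line(s)")
    return 1 if hits else 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as the last step of a job, a non-zero exit is enough for most harnesses to flag the run.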
> is there really no test case that checks whether a kernel warning or bug occurred during the runtime of all test suites, or of a specific one?
You can parse dmesg output after a set of tests.
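For example, a baseline/diff approach only inspects messages that appeared while the tests ran. In this sketch, `./run-tests.sh` and the pattern list are placeholders for your own runner and checks:

```python
#!/usr/bin/env python3
"""Sketch: flag only kernel messages emitted while the tests ran."""
import re
import subprocess
import sys

BAD = re.compile(r"WARNING:|BUG:|Oops|Call Trace:")

def read_dmesg():
    out = subprocess.run(["dmesg"], capture_output=True, text=True,
                         check=True).stdout
    return out.splitlines()

# Default dmesg lines carry timestamps, so a set makes a usable baseline.
before = set(read_dmesg())

suite = subprocess.run(["./run-tests.sh"])  # the suite under observation

# Examine only lines that appeared after the baseline was taken.
new_lines = [l for l in read_dmesg() if l not in before]
bad = [l for l in new_lines if BAD.search(l)]
for l in bad:
    print("kernel issue during tests:", l)

# Fail if the suite failed or the kernel logged something suspicious.
sys.exit(1 if (suite.returncode or bad) else 0)
```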
> Yes, of course, I could implement such a check on my own (as an inline test suite or even a complete test definition repo), but isn't that something that should be provided here as well?
It should be possible to extend the current LAVA code to support such a use case.
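In the meantime, a small wrapper can already surface the result to LAVA. This sketch calls the `lava-test-case` helper that LAVA provides inside its test shell, falling back to a plain exit code elsewhere; the test case name `dmesg-check` and the pattern list are illustrative:

```python
#!/usr/bin/env python3
"""Sketch: surface a dmesg scan as an explicit LAVA test case."""
import re
import shutil
import subprocess
import sys

BAD = re.compile(r"WARNING:|BUG:|Oops|Call Trace:")

log = subprocess.run(["dmesg"], capture_output=True, text=True,
                     check=True).stdout
result = "fail" if BAD.search(log) else "pass"

if shutil.which("lava-test-case"):
    # Records a named pass/fail result in the LAVA job.
    subprocess.run(["lava-test-case", "dmesg-check", "--result", result],
                   check=True)
else:
    print(f"dmesg-check: {result}")

sys.exit(0 if result == "pass" else 1)
```

A failing `dmesg-check` test case would then turn the job red instead of letting kernel warnings hide behind a "green" result.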