At the moment we do not have any tests, but as the tool itself is quite small, this is totally OK (citation needed).
However, I had an offline conversation with @gmeghnag and he was open to starting to add some tests, maybe setting up a GitHub Action to catch regressions, etc.
After some research, it seems that the mainstream way to test Cobra applications is to refactor the commands so that they are generated on the fly by "factory" functions (1), instead of being variables floating in their package namespace(s). Another thing that seems to be quite popular in other CLI tools (i.e. it makes testing much easier) is to have methods (2) (instead of functions) for everything from completion, to validation, and even for running (8) the commands (a rough sketch follows after the links below).
This is the approach taken by upstream projects such as the cobra library itself and kubectl (3).
Here are some articles on the topic: (4), (5).
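For illustration, a factory could look roughly like this; the command name, flag, and output are made up for the sketch, not `omc`'s actual ones:

```go
package get

import (
	"fmt"

	"github.com/spf13/cobra"
)

// NewGetCommand builds the command on the fly instead of exposing it as a
// package-level variable, so every test can construct a fresh, isolated
// instance with its own flag state.
func NewGetCommand() *cobra.Command {
	var namespace string

	cmd := &cobra.Command{
		Use:   "get [resource]",
		Short: "Display one or many resources",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Writing to OutOrStdout() (instead of os.Stdout directly)
			// is what lets tests capture the output via SetOut().
			fmt.Fprintf(cmd.OutOrStdout(), "getting %v in namespace %q\n", args, namespace)
			return nil
		},
	}
	cmd.Flags().StringVarP(&namespace, "namespace", "n", "default", "target namespace")
	return cmd
}
```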
A way to avoid this refactoring would be to look into annotations (6), but, tbh, I cannot find many examples of upstream projects going that path.
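Just to make the idea concrete: annotations are a plain `map[string]string` that cobra carries around on each command, so a made-up tagging convention (the key below is purely hypothetical) could look like this, leaving the existing command variables in place:

```go
package cmd

import "github.com/spf13/cobra"

// testKindKey is a hypothetical annotation key; cobra attaches no meaning
// to it, it is just metadata a test could read back.
const testKindKey = "omc.test/kind"

var getCmd = &cobra.Command{
	Use:         "get",
	Annotations: map[string]string{testKindKey: "read-only"},
	Run: func(cmd *cobra.Command, args []string) {
		// ... existing business logic stays untouched ...
	},
}
```

A test could then walk `rootCmd.Commands()` and pick commands by annotation, but as said, I have not seen this used much in the wild.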
Now, to follow what the cobra library and kubectl do for testing, we could start by breaking down this task: first take the business logic that does not strictly belong to Run() (7) and move it into more appropriate functions (or even directly into methods) such as Validate (2), etc.
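Roughly, and purely as a sketch (the struct and its fields are hypothetical, not `omc`'s real ones), that kubectl-style Complete/Validate/Run split looks like:

```go
package get

import (
	"fmt"

	"github.com/spf13/cobra"
)

// GetOptions gathers everything the command needs, so each step below can
// be exercised in isolation.
type GetOptions struct {
	Namespace string
	Resource  string
}

// Complete fills in derived values from the command line.
func (o *GetOptions) Complete(cmd *cobra.Command, args []string) error {
	if len(args) > 0 {
		o.Resource = args[0]
	}
	return nil
}

// Validate checks the options without touching cobra at all, so it can be
// unit-tested with a plain struct literal and no command plumbing.
func (o *GetOptions) Validate() error {
	if o.Resource == "" {
		return fmt.Errorf("a resource type is required")
	}
	return nil
}

// Run carries only the business logic.
func (o *GetOptions) Run() error {
	// ... actual work would go here ...
	return nil
}
```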
Once we're done with that, we might move on to actually implementing test cases.
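To give an idea, a first stub against the hypothetical factory sketched above could be as simple as this (the arguments and assertions are illustrative only):

```go
package get

import (
	"bytes"
	"testing"
)

// TestGetCommand builds a fresh command, captures its output, and runs it
// with fixed arguments, with no global state shared between test cases.
func TestGetCommand(t *testing.T) {
	cmd := NewGetCommand()

	out := &bytes.Buffer{}
	cmd.SetOut(out)
	cmd.SetErr(out)
	cmd.SetArgs([]string{"pods", "--namespace", "kube-system"})

	if err := cmd.Execute(); err != nil {
		t.Fatalf("command failed: %v", err)
	}
	if out.Len() == 0 {
		t.Fatal("expected some output, got none")
	}
}
```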
I'll go ahead and create a stub to test out some simple part of `omc`, and we can evaluate from there whether the refactoring is worth it, how to approach it, and all the details we should think of before actually proceeding.

Any feedback is appreciated!
References: (1), (2), (3), (4), (5), (6), (7), (8)