[27.1] test spring-cleaning #5270

Merged
merged 1 commit into docker:27.0 from thaJeztah:27.1_backport_test_spring_cleaning on Jul 19, 2024

Conversation

thaJeztah
Member

This makes a quick pass through our tests:

Discard output/err
----------------------------------------------

Many tests were testing for error conditions, but didn't discard output. This produced a lot of noise when running the tests, and made it hard to tell whether there were actual failures or the output was simply expected. For example:

    === RUN   TestConfigCreateErrors
    Error: "create" requires exactly 2 arguments.
    See 'create --help'.

    Usage:  create [OPTIONS] CONFIG file|- [flags]

    Create a config from a file or STDIN
    Error: "create" requires exactly 2 arguments.
    See 'create --help'.

    Usage:  create [OPTIONS] CONFIG file|- [flags]

    Create a config from a file or STDIN
    Error: error creating config
    --- PASS: TestConfigCreateErrors (0.00s)

And after discarding output:

    === RUN   TestConfigCreateErrors
    --- PASS: TestConfigCreateErrors (0.00s)

Use sub-tests where possible
----------------------------------------------

Some tests were already set up to use test tables, and some even had a usable name (or in some cases an "error" to check for). Change them to actual sub-tests. Same test as above, but now with sub-tests and output discarded:

    === RUN   TestConfigCreateErrors
    === RUN   TestConfigCreateErrors/requires_exactly_2_arguments
    === RUN   TestConfigCreateErrors/requires_exactly_2_arguments#01
    === RUN   TestConfigCreateErrors/error_creating_config
    --- PASS: TestConfigCreateErrors (0.00s)
        --- PASS: TestConfigCreateErrors/requires_exactly_2_arguments (0.00s)
        --- PASS: TestConfigCreateErrors/requires_exactly_2_arguments#01 (0.00s)
        --- PASS: TestConfigCreateErrors/error_creating_config (0.00s)
    PASS

It's not perfect in all cases (in the above, there are duplicate "expected" errors, but Go conveniently adds "#01" for the duplicate). There are probably also various tests I missed that could use the same changes; we can improve those in follow-ups.

Set cmd.Args to prevent test-failures
----------------------------------------------

When running tests from my IDE, it compiles the tests before running them, then executes the compiled binary. Cobra doesn't like that, because in that situation os.Args is used as the arguments for the command being executed. The command under test then sees the test flags as arguments (-test.v -test.run ..), which causes various tests to fail ("Command XYZ does not accept arguments").

    # compile the tests:
    go test -c -o foo.test

    # execute the test:
    ./foo.test -test.v -test.run TestFoo
    === RUN   TestFoo
    Error: "foo" accepts no arguments.

The Cobra maintainers ran into the same situation, and for their own use added a special case to ignore os.Args in these cases: https://github.com/spf13/cobra/blob/v1.8.1/command.go#L1078-L1083

    args := c.args

    // Workaround FAIL with "go test -v" or "cobra.test -test.v", see #155
    if c.args == nil && filepath.Base(os.Args[0]) != "cobra.test" {
        args = os.Args[1:]
    }

Unfortunately, that exception is too specific (it only checks for cobra.test), so it doesn't automatically fix the issue for other test binaries. They do provide a cmd.SetArgs() utility for this purpose: https://github.com/spf13/cobra/blob/v1.8.1/command.go#L276-L280

    // SetArgs sets arguments for the command. It is set to os.Args[1:] by default, if desired, can be overridden
    // particularly useful when testing.
    func (c *Command) SetArgs(a []string) {
        c.args = a
    }

The fix is to explicitly set the command's args to an empty slice, which prevents Cobra from falling back to using os.Args[1:] as arguments.

    cmd := newSomeThingCommand()
    cmd.SetArgs([]string{})

Some tests already take this issue into account, and I updated some tests for this, but there are likely many others that could use the same treatment.

Perhaps the Cobra maintainers would accept a contribution to make their condition less specific and to look for binaries ending with a .test suffix (which is how compiled test binaries are usually named).

(cherry picked from commit ab23024)

Signed-off-by: Sebastiaan van Stijn <[email protected]>
(cherry picked from commit ab23024)
Signed-off-by: Sebastiaan van Stijn <[email protected]>
@thaJeztah thaJeztah added this to the 27.1.0 milestone Jul 19, 2024
@thaJeztah thaJeztah self-assigned this Jul 19, 2024
@thaJeztah thaJeztah requested review from silvin-lubecki and a team as code owners July 19, 2024 11:38
@thaJeztah
Member Author

LOL; the noise on test-failures was annoying me, and I initially thought we had some new failure, but it was .. just noise; cherry-pick was clean, so let's include this in this branch.

@codecov-commenter

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 61.49%. Comparing base (a2a0fb7) to head (ca9636a).

Additional details and impacted files
@@            Coverage Diff             @@
##             27.0    #5270      +/-   ##
==========================================
+ Coverage   61.47%   61.49%   +0.01%     
==========================================
  Files         299      299              
  Lines       20833    20833              
==========================================
+ Hits        12808    12811       +3     
+ Misses       7115     7112       -3     
  Partials      910      910              

@thaJeztah thaJeztah requested a review from vvoland July 19, 2024 12:35
@thaJeztah thaJeztah merged commit bccd478 into docker:27.0 Jul 19, 2024
98 checks passed
@thaJeztah thaJeztah deleted the 27.1_backport_test_spring_cleaning branch July 19, 2024 13:09
4 participants