Recipes/ENRT/ConfigMixins: Add FirewallMixin.py #364

Merged
merged 2 commits into from
Mar 21, 2024

Conversation

SirPhuttel
Contributor

@SirPhuttel SirPhuttel commented Mar 9, 2024

The set of classes contained in this module enable users to push
firewall rulesets onto the hosts used for test runs.

There are five flavors to choose from for the actual firewall
implementation, with descriptive names:

  • NftablesMixin
  • IptablesMixin
  • Ip6tablesMixin
  • EbtablesMixin
  • ArptablesMixin

To use the provided functionality, inherit from one of them and either
provide a dictionary of "host: ruleset" entries in a property
'firewall_rulesets' or override the 'firewall_rulesets_generator()'
method with a generator yielding as many such dictionaries as desired,
one test run being performed per yielded dictionary. The latter provides
a simple means of testing with gradually growing or extending rulesets.

In order to perform post-processing on rulesets after a test run,
override the 'firewall_rulesets' setter method.
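
As a self-contained sketch of the two override styles described above (the
FirewallMixin stub and the recipe class names here are illustrative
stand-ins, not the actual LNST classes):

```python
# Stand-in sketch of the usage pattern described above; the stub mixin
# only mirrors the documented interface, not LNST's real implementation.
from abc import ABC, abstractmethod


class FirewallMixin(ABC):
    @property
    @abstractmethod
    def firewall_rulesets(self):
        """Mapping of host -> ruleset string to apply before the test."""

    def firewall_rulesets_generator(self):
        # Default: a single test run with the static rulesets
        yield self.firewall_rulesets


class StaticRulesetRecipe(FirewallMixin):
    # Variant 1: provide a static 'firewall_rulesets' property
    @property
    def firewall_rulesets(self):
        return {"host1": "table inet t { chain c { } }"}


class GrowingRulesetRecipe(StaticRulesetRecipe):
    # Variant 2: yield gradually extending rulesets, one test run each
    def firewall_rulesets_generator(self):
        for n in (1, 2, 3):
            yield {"host1": f"ruleset with {n} rules"}
```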

Changes since v1:

  • Make use of ABC and abstractmethod
  • Use more modern f"..." format strings
  • Explicitly delete stored subconfig variables to free memory
  • Use 'rulesets' where a hash of {host: ruleset} is assigned
  • Keep stored rulesets in config object, too
  • Merge BulkFirewallMixin into FirewallMixin
  • Rename Ip46tablesMixin to IptablesBaseMixin
  • Fix for buggy _extract_tables_n_chains() implementation, also
    skip empty ruleset lines
  • Dump and restore rule counter values along with iptables rulesets
  • Implement FirewallControl, a class to instantiate on agents
  • Eliminate the use of temporary files
  • Update 'firewall_rulesets' attribute after each test

Tests

I wrote a few hello-world recipes for testing; they're here:
SirPhuttel@ea21b0e

Bugs

Overriding a parent class's property is not easy if a setter implementation
is also present. For instance, when inheriting NftablesMixin and only
overriding the getter, it has to be decorated using
'@NftablesMixin.firewall_rulesets.getter' instead of just '@property'. When
overriding both getter and setter, though, the expected '@property' and
'@firewall_rulesets.setter' suffice.
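
A minimal, self-contained demonstration of this gotcha (the classes below
are stand-ins, not the actual LNST code):

```python
# Demonstrates the property-override behavior described above with
# illustrative stand-in classes.
class NftablesMixin:
    @property
    def firewall_rulesets(self):
        return {}

    @firewall_rulesets.setter
    def firewall_rulesets(self, value):
        self._stored = value


class GetterOnly(NftablesMixin):
    # Overriding only the getter: a plain @property would drop the
    # inherited setter, so the parent property's .getter decorator is
    # needed to keep it.
    @NftablesMixin.firewall_rulesets.getter
    def firewall_rulesets(self):
        return {"host1": "ruleset"}


class GetterAndSetter(NftablesMixin):
    # Overriding both: the usual @property / @x.setter pair works.
    @property
    def firewall_rulesets(self):
        return self._stored

    @firewall_rulesets.setter
    def firewall_rulesets(self, value):
        self._stored = value
```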

Reviews

@jtluka
@enhaut
@olichtne
(Thanks in advance!)

@jtluka jtluka requested review from olichtne and jtluka March 11, 2024 07:31
Member

@enhaut enhaut left a comment


Just a few minor comments from my side; overall this looks good to me.

Collaborator

@jtluka jtluka left a comment


Besides some minor issues, I think this looks ok.

@olichtne
Collaborator

Is there a test suite into which I could integrate unit tests for the config mixin?

not really... this lnst repository doesn't really have "self tests". We run all of our recipes regularly, and typically also use those to schedule a test run in beaker with the modified code from the PR to provide proof of testing.

The applied ruleset should be dumped

what exactly do you mean by dumped?

do you want to print the information for "debug purposes" or just because it could be "interesting"? or is this information IMPORTANT in some way and should be relevant for the PASS/FAIL of a recipe?

This also provides context for what to do with

I wrote a few hello world recipes for testing, they're here: SirPhuttel@630fb76

as, if these are eventually meant to do some form of MEASUREMENT which we want to regularly evaluate, we'd want to look into integrating these tests into the lnst.Recipes.ENRT test set.

If these are "significantly" different, we probably want to create a new test set... lnst.Recipes.Firewalls, for example.

@SirPhuttel
Contributor Author

SirPhuttel commented Mar 12, 2024 via email

@SirPhuttel
Contributor Author

SirPhuttel commented Mar 12, 2024 via email

@olichtne
Collaborator

You meant manual call of unlink(), right? Seems sensible, though I don't
see the benefit (but I see everything through procedural goggles
anyway).

yes, i mean the manual call of unlink()

The benefit is that in case there's an unhandled exception raised somewhere for some reason, the with ... as fd: block automatically takes care of the cleanup of the relevant objects.

When you're calling the unlink function manually you'd probably have to use a try-except-finally code structure and call unlink from the finally block to ensure that it gets called.
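
A self-contained sketch of the two cleanup styles being compared here,
using a temporary file as the example resource (the function names are
illustrative):

```python
# Contrast of context-manager cleanup vs. manual unlink with try/finally.
import os
import tempfile


def with_block_style(data):
    # The 'with' block closes the file even if an exception is raised;
    # delete=True (the default) also removes it on close.
    with tempfile.NamedTemporaryFile() as fd:
        fd.write(data)
        fd.flush()
        return os.path.getsize(fd.name)


def manual_style(data):
    # Manual unlink() needs try/finally to guarantee cleanup on errors.
    fd = tempfile.NamedTemporaryFile(delete=False)
    try:
        fd.write(data)
        fd.flush()
        return os.path.getsize(fd.name)
    finally:
        fd.close()
        os.unlink(fd.name)
```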

Hmm. I started with just FirewallMixin and implemented the bulk variant in a second step. The non-bulk variant just used self.firewall_rulesets,

yeah, i think the non-bulk variant is simply a special case of the bulk variant which is why it could be merged together.

Sorry for the buggy code

no worries, this is why we do reviews :)

BTW: I'm doing this temp file thing merely because it's not possible to
feed data via stdin to processes spawned by host.run(). But given the
multiple layers and possible unwanted side-effects of feeding large data
this way I guess the temp file method is the better choice, anyway.

yeah... at some point it is a bit more beneficial to write a module/class that runs directly on the agent instead of running code on the controller and sending data to the agent. i think this isn't very well documented but you can do this:

import logging

class MyCustomCodeClass:
    def method1(self):
        logging.debug("this runs on the agent")

class MyRecipe:
    def test(self):
        host1_custom_class = self.host1.init_class(MyCustomCodeClass)
        host1_custom_class.method1()

more complex example can be seen in: https://github.com/LNST-project/lnst/blob/master/lnst/Recipes/ENRT/BasePvPRecipe.py#L144

which uses this "code class": https://github.com/LNST-project/lnst/blob/master/lnst/RecipeCommon/LibvirtControl.py

@olichtne
Collaborator

Logged or stored for later.

do you want to print the information for "debug purposes" or just because it could be "interesting"? or is this information IMPORTANT in some way and should be relevant for the PASS/FAIL of a recipe?

Users may want to see stateful ruleset data (e.g., counter or quota
values) after the test run has finished, to verify correctness of the
test or qualify results. They may be used to check e.g. how many packets
were offloaded.
I didn't find other recipes/mixins collecting data after the fact (aside
from the obvious perf/ping results). Is this just off-topic or do tests
collect such data by themselves?

they currently don't... but if we want to do something with the data from the
dump, we should look into how we can do that instead of just printing into logs
which then need to be looked at in a separate step AFTER the test, whether that's
automatic or manual by a user...

e.g. if a test result is OBVIOUSLY FAIL because "a zero was reported with
counter X" then we should IMO look into how to include this into the test
itself.
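
As a hedged sketch of the kind of in-test check suggested here: parse the
packet counters out of an `iptables-save -c` style dump and flag rules whose
counter is zero (the helper name and the sample dump are illustrative, not
part of LNST):

```python
# Illustrative check: find rules with a zero packet counter in an
# `iptables-save -c` style dump, where each rule line is prefixed with
# "[packets:bytes]".
import re


def zero_counter_rules(dump):
    zero = []
    for line in dump.splitlines():
        m = re.match(r"\[(\d+):(\d+)\]\s+(-A .+)", line.strip())
        if m and int(m.group(1)) == 0:
            zero.append(m.group(3))
    return zero


sample_dump = """\
[42:3360] -A INPUT -p tcp --dport 22 -j ACCEPT
[0:0] -A INPUT -p udp --dport 53 -j ACCEPT
"""
```

A recipe could then fail the test run if `zero_counter_rules()` returns a
non-empty list for a rule that was expected to match traffic.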

Accept a string in exec_cmd() parameter 'stdin' to feed into the called
process.
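
A hedged sketch of what this commit describes, i.e. feeding a string to the
spawned process via a 'stdin' parameter; `exec_cmd` below is a simplified
stand-in, not LNST's actual implementation:

```python
# Simplified stand-in for an exec_cmd() that accepts a 'stdin' string.
import subprocess


def exec_cmd(cmd, stdin=None):
    proc = subprocess.Popen(
        cmd,
        shell=True,
        stdin=subprocess.PIPE if stdin is not None else None,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    # communicate() feeds the whole string to the process and reads both
    # output pipes, avoiding the deadlocks manual write/read could cause.
    out, err = proc.communicate(input=stdin)
    return out, err, proc.returncode
```

This avoids the temporary-file detour discussed above, since e.g. a ruleset
string can be piped straight into a restore command.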
Collaborator

@olichtne olichtne left a comment


one last comment, otherwise this looks good to me.

@olichtne
Collaborator

if there are no other comments from @jtluka tomorrow, i can merge this in.

@olichtne olichtne merged commit bd9fe36 into LNST-project:master Mar 21, 2024
3 checks passed