Added Control of link up events by interruptions #136
Conversation
@Ktmi, overall looks good, and it's nice that you also took the opportunity to augment some of the UI components. I've asked for a few minor changes though. Also:
1 - Let me confirm my understanding regarding #62: it looks like that one will only be covered once the on_*_maintenance_* methods are nuked, which we'll only be able to do when the rest of the topology.interruption.start|end producers (maintenance in this case) are in place, right? Do you want to create a new task to make this explicit, or will you reuse the same one?
2 - Once the comments are addressed, I'd be OK merging this one since the topology.interruption.start|end producer hasn't landed yet, so it should be safe. But before merging, let me know which local tests you minimally explored/ran.
Asked for changes on https://github.com/kytos-ng/topology/pull/136/files#r1231527760
```python
self.handle_interruption_start(event)
```

```python
def handle_interruption_start(self, event: KytosEvent):
    """Deals with the start of service interruption."""
```
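For context, here's a minimal, self-contained sketch of what such a handler pair could look like. This is a toy model, not the actual kytos-ng/topology implementation: the `Topology` class, the event content layout (`type`, `links`), and the `notifications` list standing in for published KytosEvents are all assumptions for illustration.

```python
import logging

log = logging.getLogger(__name__)


class KytosEvent:
    """Simplified stand-in for kytos.core.events.KytosEvent."""

    def __init__(self, name, content=None):
        self.name = name
        self.content = content or {}


class Topology:
    """Toy model: tracks interrupted links and records notifications."""

    def __init__(self):
        self.interrupted_links = set()
        self.notifications = []  # stand-in for published KytosEvents

    def handle_interruption_start(self, event: KytosEvent):
        """Deal with the start of a service interruption."""
        interruption_type = event.content.get("type", "unknown")
        for link in event.content.get("links", []):
            self.interrupted_links.add(link)
            # centralizing log.info here, per the review discussion
            log.info("Link %s interrupted (%s)", link, interruption_type)
            self.notifications.append(("link.down", link))
        self.notifications.append(("topology.updated", None))

    def handle_interruption_end(self, event: KytosEvent):
        """Deal with the end of a service interruption."""
        for link in event.content.get("links", []):
            self.interrupted_links.discard(link)
            log.info("Link %s interruption ended", link)
            self.notifications.append(("link.up", link))
        self.notifications.append(("topology.updated", None))


topo = Topology()
topo.handle_interruption_start(
    KytosEvent("topology.interruption.start",
               {"type": "maintenance", "links": ["link-a"]}))
topo.handle_interruption_end(
    KytosEvent("topology.interruption.end", {"links": ["link-a"]}))
print(topo.interrupted_links)  # → set()
```

The point of the sketch is the shape of the pair: each handler publishes the per-link up/down notifications and then a single topology update, which is also where a centralized `log.info` would naturally live.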
@Ktmi, considering the point that @italovalcy also brought up in yesterday's meeting, maybe both handle_interruption_start
and handle_interruption_end
would be good points to centralize log.info
entries; after all, we're also using log.info
for port status changes, so it'd be reasonable here too. Another benefit of centralizing here in topology
is that it could simplify certain logs for interruption sources. Wdyt?
@Ktmi let me know about this one here, if you agree and also if it'll be part of this PR or a subsequent one.
@Ktmi, I'll leave this PR pre-approved, looks good.
I've also taken the opportunity to explore it locally; I ran these tests:
- I confirmed that with liveness it wouldn't redundantly notify up again when it gets enabled.
- I also used the same link-flap script that I used on my previous topology PR; no anomalies observed, and the final states were as expected.
Once e2e tests are passing, and if your local exploratory-ish tests are passing as well, we can merge this. Btw, I raised a new point regarding introducing a log.info
to get your thoughts on, but that can also be delivered in a subsequent PR if we want to add it.
I haven't run e2e tests yet, as I kept finding potential consistency issues with link activation/deactivation; I ended up rewriting a large portion of topology, and am starting to creep into other NApps. A few notes on the matter:
@Ktmi, I ran e2e on this PR that's on top of your branch, and it's passing. It's great that you're analyzing and putting thought into it, but as usual, let's keep moving and address tasks based on their priorities and leave for
Right. But, let's leave this one for
Topology
Indeed
Yeah, I guess we can put those additional changes in a later release and just push out what we have already. For now, let's get this merged.
I'm an idiot, the issue was on the maintenance side. The maintenance events were using an incorrect endpoint.
Good morning, let's merge this one then, and once the MW PRs are also ready we can make a decision on whether it'll be on
This PR adds an event listener for the new interruptions API to allow kytos-ng/topology to produce the appropriate link up and down events, as well as the topology notify update event. Indirectly, this should resolve the following issues:
- handle_switch_maintenance_end is assuming interface(s) will go up when maintenance ends (#62)
- notify_topology_update() (#63)
- handle_link_maintenance_start is using new Link references that don't exist on self.links (#64)

Additional Notes
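Since the PR hinges on reacting to named events, here's a small sketch of name-based event dispatch, loosely mimicking the decorator-registration pattern Kytos uses. The `listen_to`/`dispatch` functions, the glob matching via `fnmatch`, and the event payloads are all assumptions for illustration, not the real Kytos API.

```python
import fnmatch

HANDLERS = {}


def listen_to(pattern):
    """Register a handler for event names matching a glob pattern."""
    def decorator(func):
        HANDLERS[pattern] = func
        return func
    return decorator


@listen_to("topology.interruption.start")
def on_interruption_start(event):
    # hypothetical payload: {"link": "<link id>"}
    return f"start:{event['link']}"


@listen_to("topology.interruption.end")
def on_interruption_end(event):
    return f"end:{event['link']}"


def dispatch(name, event):
    """Call every handler whose registered pattern matches the event name."""
    return [handler(event)
            for pattern, handler in HANDLERS.items()
            if fnmatch.fnmatch(name, pattern)]


print(dispatch("topology.interruption.start", {"link": "s1-s2"}))
# → ['start:s1-s2']
```

Glob-style patterns are what make a single listener able to cover both `topology.interruption.start` and `topology.interruption.end` (e.g. registering on `topology.interruption.*`), which is the shape this PR's listener takes.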
While I have added visibility of
status
and status_reason
to the UI for Links and Switches, this feature is missing from Interfaces. Additionally, changes to status/status_reason are not registered unless the page is reloaded. This applies to changes caused by enabling/disabling a device, or by causing an interruption at the device.