feat: modules as plugins #1002 #3062
base: main
Conversation
Force-pushed from b76cf21 to 64aa6b5
What's the intent in terms of bundling modules in the future? Should we have multiple download options, such as:
In terms of the UI, I'm thinking we use the Connections tab for inspiration.
I will want to talk about data structures once #3045 is merged. I think the cache should be in
I think that having multiple download options will add more confusion and complexity than it is worth. My view is that we should always include 'common' modules. And the versions bundled in the betas would be updated much less often, maybe as infrequently as just before a stable release. I'm not yet sure if it should be possible to include modules inside exports, or if they need to be installed separately (for online users, this could be automatic).
I think that makes sense. That list could have icons to indicate what 'types' of build each module has. I'll give this a try, this is just the inspiration I need to have some direction :)
The code is still in the 'getting things working' stage, and I haven't thought much about the cache other than knowing what I am doing is temporary. Tbh, until there is an api to scrape, I don't have any idea how the fetched data will look, or if I am assuming it will provide more in the first level than it does.
Force-pushed from 781b691 to 31cc6f0
Thank you Julian for digging into this super long-standing request.
According to the open questions:
I'd suggest showing a hint in the module list of the management tab, maybe a little up arrow or something. On the tab header we could have a dot with the number of updates available.
That's where the mass management is a benefit. You are online once, download all modules, and go to the job. Also the user-accessible module directory helps users who are installing completely offline. They can install Companion offline and then just copy/unzip a bundle with all the needed modules to the directory.
I'd say yes. Then the only difference between a builtin module and all other modules is that the builtins are always delivered with the core application.
Why should there be any difference from today? I see that the legacyId system is not super robust and there might be an increase of problems if the user can downgrade modules more easily. Do you have any particular problems in mind?
My suggestion would be to always bundle the internal modules and that's it, no different installers for the core application. But at the same time there needs to be a convenient download possibility for an "all modules bundle", so you can grab them e.g. as a zip archive and then just drop them in the module directory with only one additional step if you are doing a complete offline installation.
And another thing is coming to my mind right now. Today we are talking about modules for connections. Maybe in the future we want to have plugins also for surfaces, APIs, Themes, Language-Packs...
@dnmeid Most of what you mentioned above @Julusian and I have been discussing in Slack, so I'll add some of that here. I tend to agree that the Modules tab should be more of a 'settings' aspect of the workflow instead of a stopping point in getting a working installation going (except for maybe offline users). Custom having its own section was discussed, and conflicts related to version numbers in custom vs published releases were a main sticking point to keeping them clearly separated.

I also think it would be better for the "built in" modules to not necessarily show up in the "installed" list; my preference being that they install on demand as part of creating a connection, and I thought it was a good option to allow JIT install of a newer "discovered" version of a module at the time of creating the connection. But none of that negates the need for the Modules tab to purge unused versions, upload custom versions, or bulk upload a module bundle. Part of not installing the built-in modules was so that any used modules would be copied over as installed on use, so that they are retained on a core update. I'm not sure yet what the module upgrade notifications should look like, but that's definitely going to be a part of this somehow.

The offline workflow is going to be the more interesting aspect. I don't like the concept of bundling "commonly used" modules because it's doubtful that's going to 100% cover offline users' needs anyway. My thoughts have been to have a 'minimal' and 'full' version (minimal having no modules and full being what we do today). Those could just as easily be called 'online' and 'offline'. Or we go with no bundled modules and have an offline module bundle download. I'm good with either. But the notion that someone can "sync up" after they download before they take the system back offline is not workable for my use-cases, because we need the stable installs on a shared drive for the offline systems. Those would not have an opportunity to grab modules from the online system right after downloading and installing.

Julian and I have discussed a module approach for Surfaces as a start, so that has been in the back of our minds during this. Services would be next on the list after that.
I don't feel strongly on this, just that later on surfaces will also use a similar 'plugin' system. So if we call this plugins, would the surface ones share the ui, or be called something else? There is already some confusion from users over how to enable additional surface types.
Good point, I don't see why not. It might be a little confusing though, as I'm not sure if we can show the module help until we have installed the module (the store api is very unknown at this point).
This is my current reasoning for it. If we have a module on disk claiming to be 'v1.1.1', is that the same as the version on the store, or is that a user-modified/custom build? That affects how we should treat it in the ui. If it's the same as the store then we shouldn't offer to download that version. If it's different then we should.
I disagree with this, I don't want this directory to be accessible.
Yeah, not something I've thought about. I think this should be a follow up PR, none of it sounds critical to making this PR useful.
I'm not sure that it is. Even with
I could write a lot on thoughts I have about this, but trying to keep it short: I haven't really thought too much about when/how we push connections across from one module to another, because being able to pin connections to a specific version of a module makes that harder (especially as we should allow going backwards across the rename, just like we do within a module).
The default/recommended way of adding a module will be to leave it set to 'use latest version', meaning that most of the time, when you update, it won't try to use the old versions. I propose that modules should be copied into the same store modules path just before they are first used.
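A minimal sketch of what this 'use latest version' default could look like when resolving which installed version a connection should load. The type and function names here are illustrative assumptions, not Companion's actual API:

```typescript
// Hypothetical per-connection version resolution: either pinned to an exact
// version, or "latest" so updates apply automatically.
type UpdatePolicy = { mode: "latest" } | { mode: "pinned"; version: string };

// Compare dotted version strings numerically (e.g. "1.10.0" > "1.9.2").
function compareVersions(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) return d;
  }
  return 0;
}

// Pick which installed version a connection should load.
function resolveVersion(policy: UpdatePolicy, installed: string[]): string | undefined {
  if (installed.length === 0) return undefined;
  if (policy.mode === "pinned") {
    return installed.includes(policy.version) ? policy.version : undefined;
  }
  // "latest": newest installed version wins, so old versions stop being used.
  return [...installed].sort(compareVersions).pop();
}
```

With this shape, a pinned connection that references a version no longer on disk resolves to nothing, which is where the ui would need to surface a "version missing" state.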
My concern here is that the number of modules is only going to increase. Currently we are at, I think, ~400, taking 280MB on disk, over 7400 files. (That size might jump by ~35MB if we repackage the legacy modules to no longer be handled in a special way.) If we are ok with shipping these 400, is there a point where we will say that there are too many?

That said, I think we need to keep them all in for a couple of releases. That might be necessary anyway to ensure that we aren't deleting modules out from under people.

But a question I have: while there is a simplicity to being able to load in 'all modules', is that something that is actually useful? Surely you already know what kit you have beforehand, so you can narrow it down to a small list of modules?

I should add, one thing I am leaning towards is that full exports probably wouldn't package up the modules they use, which could make doing a full import on a new machine painful if we don't ship all the modules (although those could be the wrong/old versions).
I still think there should be no difference.
We can't avoid that users want to install module x.y.z that they have obtained somewhere and that may have the same version number although it may or may not have different code. We can't guarantee that version numbers in the store are always correct either.
Yeah, for me that is a must have. I would roughly divide the Companion users in two groups here:
1. Users with a slow changing set of devices, like studios, streamers, churches, home-users, small companies with limited scope;
2. Users with dynamically changing devices, like freelance technicians, full service companies, rental companies.

Another thing just came to my mind: do we want to show module vs. core compatibility anywhere? Version numbers are one thing, but the used version of the base-module / API is different.
What I am afraid of is a user reporting a bug in 1.2.3 of a module, the maintainer then spending some hours trying to reproduce the issue and failing, because the user was running a modified build of the module which caused the bug. Maybe they modified it, maybe someone else they work with did and didn't tell them. But it sounds like this separation has to go anyway to make the bundle import work with similar ability to installing from the store. If I really want to keep it, perhaps adding another file to each module which contains checksums of the other files would be enough to cause it to be flagged as modified unless someone really wanted to avoid that.
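The "extra file of checksums" idea could be sketched roughly like this, with the packaging step recording a SHA-256 per file and the loader flagging mismatches as a modified build. The file map shape, the `checksums.json` name, and function names are assumptions for illustration only:

```typescript
import { createHash } from "node:crypto";

type FileMap = Record<string, string | Buffer>;       // relative path -> contents
type ChecksumManifest = Record<string, string>;       // relative path -> sha256 hex

function sha256(data: string | Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

// Build the checksum manifest at packaging time.
function buildManifest(files: FileMap): ChecksumManifest {
  const manifest: ChecksumManifest = {};
  for (const [path, contents] of Object.entries(files)) {
    if (path === "checksums.json") continue; // don't hash the manifest itself
    manifest[path] = sha256(contents);
  }
  return manifest;
}

// At load time, list files whose contents no longer match the packaged hashes,
// so the ui can flag the module as a modified build.
function findModifiedFiles(files: FileMap, manifest: ChecksumManifest): string[] {
  return Object.entries(manifest)
    .filter(([path, sum]) => sha256(files[path] ?? "") !== sum)
    .map(([path]) => path);
}
```

As noted above, this only raises the bar: anyone determined to avoid the flag can regenerate the checksums, but it would catch the casual "someone patched a file and didn't tell them" case.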
That is a good point to consider here. When installing from the store, companion will know what api versions it supports, so it doesn't offer to install ones which it knows are incompatible. Same when importing a single module downloaded in the browser; we just need to make sure the browser version can give users a good indicator of version support. For the 'module-packs', I guess they will need to be versioned for what version of companion they target. To keep it simple, use the same major+minor version numbers. We could either generate them at the same time as doing a release of companion, or just follow their own cycle but under the same numbering.
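A rough sketch of that compatibility filter, assuming each store entry reports the module-api version it was built against and companion supports an inclusive major.minor range. The field names and range model are assumptions, not the real store schema:

```typescript
// Illustrative store entry: a module version plus the module-api it targets.
interface StoreVersion {
  version: string;    // module version, e.g. "2.3.0"
  apiVersion: string; // module-api version it was built against, e.g. "1.8.0"
}

function parseMajorMinor(v: string): [number, number] {
  const [maj = 0, min = 0] = v.split(".").map(Number);
  return [maj, min];
}

// Companion supports module-api versions in [minApi, maxApi] (major.minor,
// inclusive). Anything above maxApi is "module-api too new for companion".
function isInstallable(v: StoreVersion, minApi: string, maxApi: string): boolean {
  const [maj, min] = parseMajorMinor(v.apiVersion);
  const [loMaj, loMin] = parseMajorMinor(minApi);
  const [hiMaj, hiMin] = parseMajorMinor(maxApi);
  const notTooOld = maj > loMaj || (maj === loMaj && min >= loMin);
  const notTooNew = maj < hiMaj || (maj === hiMaj && min <= hiMin);
  return notTooOld && notTooNew;
}
```

The same check could drive both the store's filtered listing and the "can't install" icon in the versions table.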
That's why I say we should raise awareness of the developers for that topic. I think the users that do modify the code themselves won't file a bug report for the modified version. The problem is developers sending out modules with modified code and the same version number, so they are shooting themselves in the foot.
wip: remove separation of custom versions
wip: install when adding
wip: split out file
wip: move modules into shared dir
wip: implement install latest
wip: module download headers
wip: better error handling
wip: format versions table dates better
wip: improve refresh and last updated styling
wip: some better error handling
wip: use url routing for modules manager
wip
wip: tidy
fix: module filter producing bad results
wip: docs
wip: add file on disk to indicate that store version is a prerelease
wip: refactor
wip: versions visibility
wip: refactor visibility header buttons to avoid duplication
wip: better tables
wip: implement things a little more
wip: start of module manage page
wip: start of per module data fetching
wip: remove unused files
wip: change types
wip: remove old modules poc ui
wip: some ui boilerplate
wip: move cache storage into cache
wip: some refinement of modules discover tab
wip: new ui to discover and install modules
wip: add import module button
wip: new ui design
fix: reimplement reloading dev module (untested)
wip: some more flow
wip: some crude ui to install, backend to install modules
wip: crude start of module discover ui
wip: start of backend to scrape module store
fix: add warning
wip: present as proper table
wip: fix module adding
wip: adding and removing custom modules
wip: first version of importing custom modules
wip: alternate list of all modules
wip: fixup module list page a bit
wip: fix help modal
fix: incompat warning
wip: config fixup
wip: choose module versions to use
wip: reworking (again)
wip: try using specific version when adding
wip: better
wip: attempt at ui for adding modules
wip: something
wip: change to single dev module
wip:
wip: list versions in ui
wip: Some ui
wip: pump more data to the ui
wip: remove separation between legacy and bundled
wip: something ui
Until we have a store, or a bundle to import, we need the builtin modules. This reverts commit 34c26c5b1a0f1c1989ad08af2986db65911a5757.
Force-pushed from 4fbe42d to 3c73fa1
I'm struggling to figure out how to present this in a vaguely sane way.
It is also worth noting that installing a new version could affect other modules, but that can be indicated with a warning shown below the dropdown when one of those versions is selected. I don't know how to present this in a dropdown that doesn't feel confusing/cluttered. So I am tempted to treat this as two different scenarios.
If the user wants other versions, they can do that in the modules tab, where we have space and options on how to present this in a meaningful manner.
I am getting near to the end of my todo list, and one of the chunks I haven't reimplemented yet is the legacyId logic. The problems I see with the current system are:
None of this sounds like a dealbreaker, just some edge cases to be careful of. The downsides I see with moving it away from the manifest:
I think this one will be a big problem, with offline being considered a common scenario. I guess I have no strong argument for changing it given this list, so for now I won't change it.
Afair our legacyId system doesn't rewrite the instance type in the db, right? It just reinterprets the instances on the fly and hides the legacy module. So one module can take over several older ids (n:1) but one old module can't be taken over by several newer ones (1:n). So we actually have one edge case right now: if multiple modules declare themselves to be the successor of one module, I guess the latest one wins then.

For the normal workflow I think we will need a replacement now on a per module basis and not per system. E.g. if I think it is safe to show all versions of

I think I didn't get the deprecation point yet. If you want to deprecate a module, I think you have to update a field in the manifest. Does that mean you have to do a new version for it? Actually yes, otherwise there could be the same version with or without the deprecated flag. Or do you deprecate older versions in the manifest like "with this version I declare 1.0 deprecated"? Or is deprecation stored somewhere else?
No, it does update the instance type in the db.
Yep, which one wins is not particularly deterministic, it depends on the order of keys in a map.
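To make the n:1 edge case concrete, here is a sketch of a legacyId index that collects every claimant instead of letting map-key order pick a winner, so the resolution can be deterministic (or at least warn on conflicts). The manifest shape and field names are illustrative assumptions:

```typescript
// Assumed manifest shape: each module lists the old ids it takes over.
interface ModuleManifest {
  id: string;
  legacyIds: string[];
}

// Build: legacy id -> every module claiming to replace it.
function buildLegacyIndex(manifests: ModuleManifest[]): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const m of manifests) {
    for (const legacy of m.legacyIds) {
      const claimants = index.get(legacy) ?? [];
      claimants.push(m.id);
      index.set(legacy, claimants);
    }
  }
  return index;
}

// Resolve deterministically: sort claimants so the winner no longer depends on
// iteration order, and surface multi-claim cases so the ui can warn.
function resolveLegacyId(
  index: Map<string, string[]>,
  oldId: string,
): { winner?: string; conflict: boolean } {
  const claimants = [...(index.get(oldId) ?? [])].sort();
  return { winner: claimants[0], conflict: claimants.length > 1 };
}
```

Sorting is only one possible tie-break rule; a real implementation might prefer the most recently published claimant, but the point is that the choice becomes an explicit decision rather than map-order luck.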
Yeah, which is why we need to make sure that we can easily figure out which other modules which either replace or are replaced by the current module.
So far the extent of my thought on this chunk is 'it needs to be possible to upgrade and downgrade across id changes'. I am also worried about the confusion it could add to the version picker, as we will need to differentiate versions with the old module name. Maybe the
I think that deprecation is a property of the store, and not of module packages. I see this as: we can deprecate a module, typically when it has been replaced by another. And I see two modes for deprecating a version of a module.
Some common scenarios where deprecation would be used:
Your Barco PDS example; I feel like that shouldn't be marked as deprecated. It's still perfectly valid to use it, even though it likely won't see any updates.

Renaming modules would be fine with just deprecating the module, and adding the legacyId. We definitely need a way of being able to hide the old module when adding a new connection, but at the same time it still needs to exist there so that the update logic can find it and figure out what to do.
Exactly, that is also what I mean with "cross-module" upgrades. There is a general problem but this is also already existing today: we don't know at what point a module did an upgrade script and if that upgrade script made a breaking change. But I also think that we should allow downgrade to older versions even with possibly broken data because people will have a reason to downgrade and then it is better to live with maybe one broken action. I would really appreciate a more formalized version history for the modules that we can show in the store and also are able to show users the exact changes of a new version instead of just a version number notification.
Ah, agreed. That is a good point for deprecation: we want them to be downloadable if someone really needs them, but not included in the bundle.
in progress..
This needs some further UX thought, but the functionality is making progress.
Managing modules
There is a new 'modules' tab, which uses the split layout approach.
The left column lists all the modules that are installed, allowing selecting which one to manage. Some more details should be added to this table at some point.
It has a similar style of filtering as we use for the connections list. One of the options is to show all modules on the store
At the top of the page, you can upload a module package (a single module, perhaps from the module author, or downloaded from the store), or a module bundle (multiple modules in one archive, for bulk importing).
The list from the store is refreshed when opening this view if it hasn't been updated in a while, and can be refreshed on demand by clicking the button.
Currently this is using a temporary 'api', pulling json files and module builds from https://github.com/Julusian/companion-module-repository-poc. This repo has been seeded with the module versions from a few releases of companion.
In the right panel, when a module is selected, the table lists all the versions of the module either reported by the store or already installed.
You can install/uninstall versions here with the button on the left. That button changes to different icons if it can't be installed for some reason (eg, module-api is too new for companion, or the api doesn't give a download url).
The icons on the right indicate whether each version of the module is in use.
The data for this table is fetched from the api separately, so has its own refresh button.
Managing connections
When adding a connection, you can choose what version of the module to use. You can choose between the 'dev' version (if one is found), installing the latest stable version from the store, and any already installed stable versions.
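The options described above could be assembled along these lines; the option ids and labels are illustrative, not the real ui strings:

```typescript
// One entry in the version picker shown when adding a connection.
interface VersionOption {
  id: string;
  label: string;
}

// Assemble the choices: a dev build if one was found, "install latest from
// the store", and any already-installed stable versions.
function buildAddConnectionOptions(
  hasDevBuild: boolean,
  latestStoreVersion: string | undefined,
  installedVersions: string[],
): VersionOption[] {
  const options: VersionOption[] = [];
  if (hasDevBuild) options.push({ id: "dev", label: "Dev version" });
  if (latestStoreVersion) {
    options.push({
      id: `store:${latestStoreVersion}`,
      label: `Install latest (v${latestStoreVersion})`,
    });
  }
  for (const v of installedVersions) {
    options.push({ id: `installed:${v}`, label: `v${v} (installed)` });
  }
  return options;
}
```

The connection edit panel would build a longer list from the same pieces, since it also offers non-latest store versions.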
On any connection, you can now open the right panel, even when disabled/broken. In here you can change the label of the connection, the version of the module that it is using and the update policy. This includes more versions than when adding the module.
Some additional behaviours:
TODO:
- Should builtin modules be moved into the same store modules path, so that they persist when upgrading/downgrading? ~~builtin modules will be removed~~
- Does the legacyId flow work now?