Upgrading Sentry first is basically a non-starter for many users relocating from self-hosted: for a variety of reasons, getting an upgrade all the way to the latest version across the finish line is organizationally very challenging for them. It requires a fresh layer of security approvals, since the latest self-hosted Sentry runs roughly 38 services versus ~5 for the older versions these users are on (e.g. Sentry 9). It also means working through multiple breaking-change upgrades, re-configuring the PaaS architectures that many would-be relocators have built, etc. It's a very large lift, and basically no one continues with the self-hosted relocation process once they're told they have to upgrade first.
This leaves us in a bit of a pickle: we're going to be making significant changes to the backup script and building a SaaS-ingestion pipeline around them, but the users who would most benefit from these changes can't take the "easy" path we'll have created, because they're blocked from upgrading in the first place.
To resolve this, I propose we split the "relocation" flow in two: a "self-serve" path for folks who are able to upgrade their self-hosted instance to the latest Sentry, and an "SE-assisted" path for those who cannot.
The self-serve path is the one we have been discussing: you upgrade to the latest self-hosted version, create a JSON export of your install using backup.py, upload it via a new API endpoint, we run all the validation against a validation database as we've discussed, and so on.
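To make that concrete, here's a rough sketch of what the self-serve flow could look like from the user's side. The upload endpoint and auth token below are placeholders (that endpoint is exactly the thing we'd be building), and I'm assuming the export is driven via the `sentry export` command that wraps backup.py:

```python
# Rough sketch of the self-serve path, assuming the user has already upgraded
# to the latest self-hosted release. The relocation upload URL and token are
# placeholders; only the export command exists today.
import subprocess

import requests  # third-party: pip install requests

EXPORT_PATH = "sentry_export.json"
UPLOAD_URL = "https://sentry.io/api/0/relocations/"  # hypothetical endpoint

# 1. Produce the JSON export from the (already upgraded) self-hosted install.
subprocess.check_call(["sentry", "export", EXPORT_PATH])

# 2. Upload it to the new relocation endpoint, which validates it against the
#    validation database and kicks off SaaS ingestion.
with open(EXPORT_PATH, "rb") as f:
    resp = requests.post(
        UPLOAD_URL,
        files={"file": f},
        headers={"Authorization": "Bearer <user-auth-token>"},  # placeholder
    )
resp.raise_for_status()
print("Relocation request accepted:", resp.json())
```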
For the SE-assisted route, we would instead do the following:
Use a pre-prepared standalone patch (plugin?) that modifies the source of their Sentry image** and adds a new API endpoint to the user's old self-hosted version. This would add a superuser-only screen that essentially runs pg_dump in the background and produces a dump of their Postgres database for download. The user would then send the resulting dump back to the SE.
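For illustration, the patched-in endpoint might look roughly like this. None of it exists in Sentry today: the view name, URL wiring, and response handling are placeholders, and the real patch would have to match whatever Django version the user's old install is running:

```python
# Illustrative sketch of the patched-in, superuser-only dump view. Everything
# here is a placeholder; it is not existing Sentry code.
import os
import subprocess
import tempfile

from django.conf import settings
from django.contrib.auth.decorators import user_passes_test
from django.http import HttpResponse


@user_passes_test(lambda u: u.is_superuser)  # superuser-only, per the proposal
def download_pg_dump(request):
    db = settings.DATABASES["default"]
    dump = tempfile.NamedTemporaryFile(suffix=".dump", delete=False)
    dump.close()

    # Custom-format dump so the SE can pg_restore it on their VM later.
    subprocess.check_call(
        [
            "pg_dump",
            "--format=custom",
            "--host", db.get("HOST") or "localhost",
            "--username", db.get("USER") or "postgres",
            "--file", dump.name,
            db["NAME"],
        ],
        env=dict(os.environ, PGPASSWORD=db.get("PASSWORD") or ""),
    )

    # Reading the whole dump into memory is fine for a sketch; a real version
    # would stream it back instead.
    with open(dump.name, "rb") as f:
        response = HttpResponse(f.read(), content_type="application/octet-stream")
    response["Content-Disposition"] = 'attachment; filename="sentry-postgres.dump"'
    return response
```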
The SE would start up a VM, load the user's version of self-hosted, restore the pg_dump output they sent over, and run the migrations on their behalf. This ensures that all of our numbered database migrations are applied in the correct order.
Once the SE reaches the latest version on this VM, they would run the backup.py script as in the self-serve path, and continue the relocation using the same endpoint that path uses (ensuring that all data is still validated and so on).
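Sketching the SE-side sequence on that VM (paths, database names, and connection details are placeholders; the exact set of upgrade hops would depend on the version the user started from):

```python
# Rough sketch of the SE-side steps on the throwaway VM. Assumes the user's
# dump was produced with pg_dump --format=custom and that the VM is already
# running the user's original self-hosted version plus its Postgres.
import subprocess

DUMP_PATH = "/tmp/sentry-postgres.dump"   # the dump the user sent over
EXPORT_PATH = "/tmp/sentry_export.json"   # output for the self-serve endpoint

# 1. Restore the user's database into the matching old Postgres instance.
subprocess.check_call([
    "pg_restore", "--clean", "--if-exists", "--no-owner",
    "--dbname", "sentry", DUMP_PATH,
])

# 2. Step the install forward to the latest version, running `sentry upgrade`
#    at each hop so the numbered migrations apply in order. Shown as a single
#    call here for brevity.
subprocess.check_call(["sentry", "upgrade", "--noinput"])

# 3. Once on the latest version, produce the same JSON export the self-serve
#    path uses and submit it through the same validation/ingestion endpoint.
subprocess.check_call(["sentry", "export", EXPORT_PATH])
```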
** This patch would take the form of a zipped set of files with an install shell script. Basically, we would swap their Sentry image for a slightly modified one that carries the new endpoint. We'd probably need several of these patches to account for changes between versions, but we could start with one for just Sentry 9 and see how it goes.