Free workspace content older than one year will be deleted
According to a recent email from Slack:
Since there are size restrictions on data uploaded to Discord, the bot currently "imports" large files by uploading a thumbnail and linking to the original Slack-hosted file. If Slack is deleting old content, these links are probably going to expire as well.
Without finding somewhere else to host this data, the best fallback is to download the data locally so it at least isn't lost. This could be done by providing another CLI entry point in this package that just downloads the data.
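One possible shape for such a download entry point, sketched here as an illustration only: it assumes the standard Slack export layout (per-channel, per-day JSON files of message objects, where each attachment carries a `url_private` that must be fetched with the workspace token). The function names are hypothetical, not an existing API in this package.

```python
import json
import pathlib
import urllib.request


def iter_file_urls(messages):
    """Yield (file_id, url) pairs for every attachment in a list of
    Slack-export message objects (dicts parsed from a day's JSON file)."""
    for message in messages:
        for attached in message.get("files", []):
            if "id" in attached and "url_private" in attached:
                yield attached["id"], attached["url_private"]


def download_export_files(export_dir: str, out_dir: str, token: str) -> None:
    """Walk an unpacked Slack export and save each attached file locally,
    named by its Slack file ID, before Slack deletes the originals."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for day_file in pathlib.Path(export_dir).rglob("*.json"):
        messages = json.loads(day_file.read_text())
        if not isinstance(messages, list):
            continue  # skip any non-message JSON in the export
        for file_id, url in iter_file_urls(messages):
            # url_private requires the workspace token as a bearer header
            request = urllib.request.Request(
                url, headers={"Authorization": f"Bearer {token}"}
            )
            with urllib.request.urlopen(request) as response:
                (out / file_id).write_bytes(response.read())
```

Saving files under their Slack file ID keeps names collision-free and makes it easy to map downloaded files back to the messages that referenced them.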
However, since just downloading the data doesn't solve the problem of the original URLs no longer working, ideally there would be some way to link to a non-Slack-hosted backup of this data instead. Organizations that want to preserve the data could then download it, host it somewhere, and have the import process generate links to those URLs instead.
Maybe something like:
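A minimal sketch of the rewrite step, assuming the mirror preserves Slack's `/files-pri/` path layout; the `MIRROR_BASE_URL` constant and function name are illustrative, not part of the project:

```python
from urllib.parse import urlparse

# Hypothetical base URL of the user-provided mirror (e.g. an S3 bucket).
MIRROR_BASE_URL = "https://some-bucket.s3.amazonaws.com"

def rewrite_slack_file_url(url: str) -> str:
    """Map a Slack-hosted file URL onto the mirror, keeping the path
    after /files-pri/ so the mirror can reuse Slack's ID-based layout."""
    parsed = urlparse(url)
    if parsed.hostname == "files.slack.com" and parsed.path.startswith("/files-pri/"):
        suffix = parsed.path[len("/files-pri/"):]
        return f"{MIRROR_BASE_URL}/{suffix}"
    return url  # leave non-Slack URLs untouched
```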
Which would convert URLs like `https://files.slack.com/files-pri/XXXXXX-YYYYYY/some-file.ext` to `https://some-bucket.s3.amazonaws.com/XXXXXX-YYYYYY/some-file.ext` when linking them.