AKA the Web Archive. We don't have any intentional support yet (as of the latest version of the code), so stuff is probably broken. Things to keep in mind:
- Contents of image URLs generally aren't archived, because the `_0` suffix isn't actually used on-page at all. Since we want to download the original-res versions, we can just extract the original URL out of the one that's linked on the page; Bandcamp doesn't appear to take down old artwork from its media server. (See the sketch after this list.)
- How should we deal with the fact that tracks are probably archived on different dates? Just dating the album (or the discography!) would be best for ease of using the output files, but it would lose information about exactly when each file came from. We could automatically suggest using `--verbose` for Web Archive downloads, and specifically log the archived pages we read each image URL from. (The snapshot timestamp in the sketch below could supply those dates.)
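Both points come down to pulling information out of the Wayback snapshot URL itself. Here's a minimal sketch of that parsing, written in Python (which may not match the project's actual language); the function names, regexes, and the example image ID are illustrative, not existing project code. It assumes the standard snapshot layout `https://web.archive.org/web/<14-digit timestamp><optional modifier>/<original URL>` and Bandcamp's usual `_<size>` artwork suffix.

```python
import re
from datetime import datetime

# Standard Wayback snapshot layout: /web/<14-digit timestamp><optional modifier like im_>/<original URL>
SNAPSHOT_RE = re.compile(
    r"^https?://web\.archive\.org/web/(?P<timestamp>\d{14})(?:[a-z]{2}_)?/(?P<original>https?://.+)$"
)

def split_snapshot_url(archived_url: str) -> tuple[datetime, str]:
    """Return (snapshot datetime, original URL) for a Wayback snapshot URL."""
    match = SNAPSHOT_RE.match(archived_url)
    if match is None:
        raise ValueError(f"not a Wayback snapshot URL: {archived_url}")
    snapshot_date = datetime.strptime(match.group("timestamp"), "%Y%m%d%H%M%S")
    return snapshot_date, match.group("original")

def original_res_art_url(image_url: str) -> str:
    """Swap the trailing Bandcamp size suffix (e.g. _16) for _0 (original resolution)."""
    return re.sub(r"_\d+(\.\w+)$", r"_0\1", image_url)

# Example: an archived page links a reduced-size cover; recover the live bcbits.com
# URL, upgrade it to _0, and keep the snapshot date for logging / file dating.
when, original = split_snapshot_url(
    "https://web.archive.org/web/20170812034159im_/https://f4.bcbits.com/img/a1234567890_16.jpg"
)
print(when.date(), original_res_art_url(original))
# -> 2017-08-12 https://f4.bcbits.com/img/a1234567890_0.jpg
```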
Download discography banner and wallpaper! #5 is cool, but we'd also like to be able to download the wallpaper/banner from archived album or track pages, which might be available even when the main discography isn't. There should maybe be a CLI option for that (roughly sketched below). (This would support putting those files under a date-prefixed discography folder...)
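For what that option could look like, here's a rough argparse sketch, again in Python; the flag name `--archived-page-assets`, the program name, and the folder naming are hypothetical illustrations of the proposal, not existing behavior of the tool.

```python
import argparse
from datetime import date

parser = argparse.ArgumentParser(prog="downloader")  # placeholder program name
parser.add_argument(
    "--archived-page-assets",  # hypothetical flag name
    action="store_true",
    help="also save the banner/wallpaper linked from archived album or track pages",
)
args = parser.parse_args(["--archived-page-assets"])

# A date-prefixed discography folder could reuse the snapshot date recovered
# by split_snapshot_url() in the earlier sketch.
snapshot_date = date(2017, 8, 12)
folder = f"{snapshot_date.isoformat()} discography"
print(args.archived_page_assets, folder)  # -> True 2017-08-12 discography
```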