
servarr download #288

Closed

Conversation

@rraymondgh (Contributor) commented Jun 30, 2024

Functionality to trigger a download using Prowlarr / Sonarr / Radarr

  • adds functionality to the Go backend to trigger a download request in Prowlarr / Sonarr / Radarr
  • the concept is that these apps provide a proxy to the download client (Transmission, qBittorrent etc.)
  • GraphQL mutation to integrate the backend with the webui, plus an extension to the GraphQL query to enable the webui to show/hide the download button based on at least one of the servarr apps being configured
  • extension to the config to allow servarr URLs and API keys to be configured for use in the backend (see the sketch after this list)
  • the webui provides feedback through the existing error feedback capability
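
For illustration, a minimal sketch of the kind of config extension described above; the struct and field names here are assumptions for this write-up, not the actual bitmagnet config keys:

// illustrative only - assumed names, not bitmagnet's actual config structs
type ServarrInstanceConfig struct {
	BaseURL string // e.g. http://prowlarr:9696
	ApiKey  string
}

type ServarrConfig struct {
	Prowlarr ServarrInstanceConfig
	Sonarr   ServarrInstanceConfig
	Radarr   ServarrInstanceConfig
}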

Notes

  • has passed the lint and test tasks
  • have found that the nix develop environment needs the following:
export XDG_CONFIG_HOME=~/.config/
export PATH=$PATH:$PWD/webui/node_modules/.bin

Enhancements / feedback consideration

  • both the Sonarr and Radarr release GET endpoints do not filter by indexer, which means all indexers are searched. One option could be to temporarily disable the other indexers before the search and re-enable them afterwards (this would also speed up the call to the release endpoint)
  • logging - currently feedback is through errors that track back to the webui. It may be useful to send these to the log as well, as they are transitory in the webui. I'm still finding fx injection a steep learning curve; a logger could be injected into resolver.go (see the fx sketch after this list)
  • documentation :-)
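
For the logging point, a rough fx sketch of what injecting a logger into the resolver might look like; the names here (NewResolver, the gql module) are assumptions and this is not bitmagnet's actual wiring:

package gql

import (
	"go.uber.org/fx"
	"go.uber.org/zap"
)

// sketch only: fx injects the *zap.Logger (assumed to already be provided in
// the app graph) into the resolver's constructor when this module is included
type Resolver struct {
	logger *zap.Logger
}

func NewResolver(logger *zap.Logger) *Resolver {
	return &Resolver{logger: logger}
}

var Module = fx.Module("gql", fx.Provide(NewResolver))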

servarr mutator
system query for download
injection of servarr config
@rraymondgh changed the title from "Graphql client download" to "servarr download" on Jun 30, 2024
@mgdigital (Collaborator) commented Jul 2, 2024

Thanks for this, some initial feedback:

I think I misunderstood what you had in mind for this feature as it seems to be searching Sonarr and Radarr, when we'd already have the torrent we're after - it would just need sending to Prowlarr's client proxy. I think having that on its own would be a good first step for this feature. I wouldn't rule out some deeper integration with the other apps if it would add value, but if set up correctly Sonarr/Radarr should pick up the fact that you've downloaded a copy of something anyway? Could you maybe elaborate on what you had in mind for the Sonarr/Radarr integration, perhaps it isn't needed for the first iteration of this feature?

I expect we'll expose this feature as a generic client proxy that can integrate with any suitable client (e.g. qBittorrent), so in terms of the API surface it should be more generic: the GraphQL parts would probably be under a clients namespace that can expose different implementations, and we'd need a base Client interface in the Go code that all clients can extend from.

Your client code is trying to do a lot at the moment, mainly because it's trying to be 3 different clients merged into 1. A good first step would be having that stripped back to do the most basic part of sending a torrent to Prowlarr. If there was going to be integration with Sonarr/Radarr too then they should probably be in separate client adapters or at least with the logic more decoupled.

With the web app rewrite underway let's probably look to merge the UI elements into that once that's further along...

@rraymondgh (Contributor, Author)

I think I misunderstood what you had in mind for this feature as it seems to be searching Sonarr and Radarr, when we'd already have the torrent we're after - it would just need sending to Prowlarr's client proxy.

I should explain the concept better. The Prowlarr search endpoint and the Sonarr / Radarr release endpoints are quite similar: a GET caches results from the configured indexers, and a POST sends a specific info hash to the configured download client. It's mandatory to cache (GET) before the POST - not doing the GET first results in an error, for all three apps, stating that the info hash is not cached.

The Sonarr / Radarr release GET request requires (seriesID, episodeID) or (movieID), hence the GET requests to other endpoints to transform the external ids that bitmagnet has into internal ids. One issue I have with the release GET endpoint is that it's not possible to filter by indexer, which means that in the case of Sonarr / Radarr all indexers are searched (it would be better if only bitmagnet was searched).
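
To make the GET-before-POST flow concrete, here is a rough resty sketch of the Prowlarr case; the endpoint path, query parameters and body fields are my reading of the API from this thread and may not match it exactly:

package servarr

import "github.com/go-resty/resty/v2"

// sketch only: search (GET) so Prowlarr caches the releases, then grab (POST)
// one of them so Prowlarr forwards it to its configured download client.
// The /api/v1/search path and the guid/indexerId fields are assumptions.
func grabFirstResult(baseURL, apiKey, query string) error {
	client := resty.New().
		SetBaseURL(baseURL).
		SetHeader("X-Api-Key", apiKey)

	var releases []map[string]any
	if _, err := client.R().
		SetQueryParam("query", query).
		SetResult(&releases).
		Get("/api/v1/search"); err != nil {
		return err
	}
	if len(releases) == 0 {
		return nil
	}

	_, err := client.R().
		SetBody(map[string]any{
			"guid":      releases[0]["guid"],
			"indexerId": releases[0]["indexerId"],
		}).
		Post("/api/v1/search")
	return err
}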

Just Prowlarr would mean that you would need to configure backdoor functionality in Sonarr / Radarr to pick up content "manually" downloaded by the download client.

I expect we'll expose this feature as a generic client proxy that can integrate with any suitable client (e.g. qBittorrent), so in terms of the API surface it should be more generic: the GraphQL parts would probably be under a clients namespace that can expose different implementations, and we'd need a base Client interface in the Go code that all clients can extend from.

Ok - will change from the servarr namespace to a client namespace across the GraphQL specs. The interface spec would effectively be:

type Client interface {
	Download(infoHashes []string)
}

Regarding the download capability of a client - as I noted on Discord, I see the *arrs as providing the capability to interface to any client. I switched from Deluge to qBittorrent to Transmission; all that was needed, after changing docker compose to install it, was a little config in the *arrs, which have UI and testing capabilities. Granted, I am maybe thinking too narrowly - just download - when it comes to a client.

Your client code is trying to do a lot at the moment, mainly because it's trying to be 3 different clients merged into 1. A good first step would be having that stripped back to do the most basic part of sending a torrent to Prowlarr. If there was going to be integration with Sonarr/Radarr too then they should probably be in separate client adapters or at least with the logic more decoupled.

There is maybe a use-case bias I am holding here. I regularly find myself using the manual search capabilities of Sonarr and Radarr when they have not found some content I want (configured size constraints or quality profile - configs which I want for auto downloads). When I see something in bitmagnet that I want, I want to trigger the download and inclusion in the TV / movie libraries. As previously noted, functionality-wise they are very similar; for Sonarr / Radarr there is the need to transform from external ids to internal ids.

If in the config only the Prowlarr API key is configured, Prowlarr will be used for all content types. So from that perspective it's a user configuration, not something automatic that forces my bias on all users.

I switched from starr to resty because I didn't like the amount of code duplication with separate sub-modules for prowlarr, sonarr and radarr. I'll have a look at extracting a transform() function that is specific to each of the arrs.

With the web app rewrite underway let's probably look to merge the UI elements into that once that's further along...

I'll move UI elements to a separate branch.

@mgdigital (Collaborator) commented Jul 2, 2024

The Prowlarr search endpoint and the Sonarr / Radarr release endpoints are quite similar: a GET caches results from the configured indexers, and a POST sends a specific info hash to the configured download client. It's mandatory to cache (GET) before the POST - not doing the GET first results in an error, for all three apps, stating that the info hash is not cached.

Okay I haven't gone into depth on what these APIs look like (and the documentation seems quite light).

If in the config only the Prowlarr API key is configured, Prowlarr will be used for all content types. So from that perspective it's a user configuration, not something automatic that forces my bias on all users.

I switched from starr to resty because I didn't like the amount of code duplication with separate sub-modules for prowlarr, sonarr and radarr

I'm for minimizing duplication but having a lot of big if blocks in each method of a single implementation isn't a great trade-off and should probably be refactored. What I'm suggesting is to start with a client that works by talking only to Prowlarr. The other integrations you've described could be considered later if there's wide interest, but it sounds like a bit of a specific behaviour that could live in a separate client adapter if it was needed. Otherwise the logical progression is to add Lidarr, Readarr etc. in a single client with ever bigger if blocks checking the value of d.use. There's also the possibility of webhooks, custom feeds etc. that might be a better way to hook into very bespoke behaviour...

This interface is just off the top of my head but it could be a start. It uses a context object and can return an error. I've put the request in a struct as it might need more parameters later:

type AddInfoHashesRequest struct {
  ClientID string
  InfoHashes []protocol.ID
}

type Client interface {
	AddInfoHashes(ctx context.Context, req AddInfoHashesRequest) error
}
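
Building on the types above, a minimal sketch of how a Prowlarr-backed adapter might satisfy this; searchAndGrab is a hypothetical helper standing in for the GET-then-POST flow discussed earlier, and I'm assuming protocol.ID can render the info hash as a string:

// sketch only, reusing AddInfoHashesRequest / Client from above;
// searchAndGrab is a hypothetical helper, not real bitmagnet code
type prowlarrClient struct {
	searchAndGrab func(ctx context.Context, infoHash string) error
}

func (c *prowlarrClient) AddInfoHashes(ctx context.Context, req AddInfoHashesRequest) error {
	for _, id := range req.InfoHashes {
		// assumes protocol.ID implements fmt.Stringer (hex info hash)
		if err := c.searchAndGrab(ctx, id.String()); err != nil {
			return err
		}
	}
	return nil
}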

@rraymondgh (Contributor, Author) commented Jul 3, 2024

Okay I haven't gone into depth on what these APIs look like (and the documentation seems quite light).

Docs are quite light; I have now coded this in Python, TypeScript and Go, so have become quite familiar with this specific capability of the APIs. Explaining it was very useful as it enabled me to see better ways to code it.

I'm for minimizing duplication but having a lot of big if blocks in each method of a single implementation isn't a great trade-off and should probably be refactored. What I'm suggesting is to start with a client that works by talking only to Prowlarr. The other integrations you've described could be considered later if there's wide interest, but it sounds like a bit of a specific behaviour that could live in a separate client adapter if it was needed.

I've completely stripped out the use of use. There is now a struct per arr, each with its own implementation in its own source file. I have stuck with just one package, servarr. There are now no if blocks for flow control, just error checking.

This interface is just off the top of my head but it could be a start. It uses a context object and can return an error. I've put the request in a struct as it might need more parameters later:

type AddInfoHashesRequest struct {
  ClientID string
  InfoHashes []protocol.ID
}

type Client interface {
	AddInfoHashes(ctx context.Context, req AddInfoHashesRequest) error
}

I've coded this up and changed servarr to use this interface. I haven't yet got round to stripping out the webui changes into a separate branch; for my dev purposes it's been useful to have the UI to validate each step of the refactoring.

@davidnewhall

Is there more we can add to golift.io/starr to provide all the features you need? Hate to see you writing all this boilerplate code.

@mgdigital (Collaborator) commented Jul 4, 2024

Is there more we can add to golift.io/starr to provide all the features you need? Hate to see you writing all this boilerplate code.

I'd be happy in principle for it to be used. I haven't had an in-depth look at the code yet so I can't give a view on @rraymondgh's reasons for not using it. It is likely that we'll need a fraction of the functionality available in the APIs, and it can be preferable to have full control over a custom implementation of just the bits we need. I've ended up going in that direction with the TMDB integration, for example. It probably depends if the Starr libs are extensible in the ways we'd need them to be (and I can't answer that without looking into it properly). In the case of TMDB we needed things like rate limiting, API key validation and retry logic that just couldn't be done nicely with the main TMDB client library for Go.

@davidnewhall

Sorry that question was really directed toward @rraymondgh. He opened a feature request in that library, I assume for this implementation. I'm happy to expose or explain any additional functionality you could need.

@rraymondgh (Contributor, Author)

Sorry that question was really directed toward @rraymondgh. He opened a feature request in that library, I assume for this implementation. I'm happy to expose or explain any additional functionality you could need.

Thanks for responding to that feature request. There are a few reasons I switched from starr to resty:

  • endpoints that I'm using not being present - however, this is not the primary reason (I have also identified /indexer/bulk)
  • a very minor point: I would need to use githash versioning as I don't want to use latest
  • taking IndexerResource as an example structure: I'm looking for a common definition across prowlarr, sonarr and radarr. I see the /indexer endpoint across the apps as being the same, and the differences can be fully ignored (just not defined in the struct)

Taking the third point - this then allows functional code implementing the download capability that is 100% common across the arrs. There is no need to do any type coalescence between radarr.IndexerOutput, sonarr.IndexerOutput and prowlarr.IndexerOutput, plus no need to implement interfaces with almost identical functions for each of the apps.

I did experiment with generics before refactoring the code to use resty, but I just could not get it to work. This reasoning is maybe flawed - I'm new to Go, and the strict type approach is great. Possibly I'm just not seeing an effective way to deal with heterogeneous structs that have a very large overlap.
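
For what it's worth, one shape generics could take here - a hand-rolled common subset struct plus a conversion constraint - is sketched below; the type names are made up for illustration and this is not how the branch is written:

// illustrative only: a common subset of the per-app indexer structs
type Indexer struct {
	ID   int64
	Name string
}

// each app-specific indexer type would implement this
type indexerLike interface {
	ToCommon() Indexer
}

// generic helper converting any per-app slice into the common type
func toCommon[T indexerLike](in []T) []Indexer {
	out := make([]Indexer, 0, len(in))
	for _, i := range in {
		out = append(out, i.ToCommon())
	}
	return out
}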

@davidnewhall commented Jul 5, 2024

I'm going to move this conversation here so I don't keep clogging up your PR discussion.

@rraymondgh (Contributor, Author)

I'd be happy in principle for it to be used. I haven't had an in-depth look at the code yet so I can't give a view on @rraymondgh's reasons for not using it. It is likely that we'll need a fraction of the functionality available in the APIs, and it can be preferable to have full control over a custom implementation of just the bits we need. I've ended up going in that direction with the TMDB integration, for example. It probably depends if the Starr libs are extensible in the ways we'd need them to be (and I can't answer that without looking into it properly). In the case of TMDB we needed things like rate limiting, API key validation and retry logic that just couldn't be done nicely with the main TMDB client library for Go.

I have refactored to use starr on another branch. I would appreciate your thoughts on which approach you prefer, resty or starr:

https://github.com/rraymondgh/bitmagnet/tree/starr_client_download/internal/servarr

  • removed the interface.go structs that overlap with those defined in starr
  • have extended an interface so that the overall control in release.go just calls functions on the interface, which then uses the implementations in prowlarr.go, sonarr.go and radarr.go (roughly the shape sketched after this list)
  • @davidnewhall has been very active in extending starr to support the endpoints being used, and would cut a release so that bitmagnet is using an x.y.z version of starr
  • a minor downside - to get a standard definition of Indexer and Release there is recasting of structures (contained in recast.go)
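
Roughly the shape described in the second bullet, with hypothetical names (the real code lives in the linked branch):

// hypothetical names only: release.go drives a small interface and
// prowlarr.go / sonarr.go / radarr.go each provide an implementation
type arr interface {
	Search(ctx context.Context, infoHash string) (guid string, indexerID int64, err error)
	Grab(ctx context.Context, guid string, indexerID int64) error
}

func download(ctx context.Context, a arr, infoHash string) error {
	guid, indexerID, err := a.Search(ctx, infoHash)
	if err != nil {
		return err
	}
	return a.Grab(ctx, guid, indexerID)
}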

@rraymondgh (Contributor, Author) commented Jul 27, 2024

Completely revamped to use a new docker container. Example docker compose:

services:
  arr-services:
    container_name: arr-services
    image: ghcr.io/rraymondgh/arr-interfaces:latest
    volumes:
      - ${HOME}/.config:/config
    restart: unless-stopped
    environment:
      - XDG_CONFIG_HOME=/config
    ports:
      - "3335:3335"
    command:
      - worker
      - run
      - --all

  • Have created a new GraphQL query, TorrentContentByID, to enable a fast query for the content type and other attributes of an info_hash.
  • Have added rover to the nix environment and genqlient to tools. Together these allow for integrating with the arr-services GraphQL interface in the container.
  • There are now two first-class download clients, qBittorrent and Transmission; other clients will be supported through Sonarr / Radarr / Prowlarr but will do a search before grab.
