So far, the inference pipeline has only been tested on small batches of URLs (around 150 at a time). Since we will need to run it on the millions of URLs in the SDE, it must be able to process them without overloading the server.
For this issue we need to do three things:
test the pipeline on the larger collections that will expose the current failure types
make any modifications to the batch-size processing that are needed to run it successfully (see the sketch after this list)
provide code that can run inference in batch on all of our collections
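As a rough illustration of the batch-size control we have in mind, here is a minimal sketch in Python. It assumes the pipeline exposes some callable that accepts a list of URLs; `run_in_batches`, `batched`, `infer`, `batch_size`, and `pause_s` are all placeholder names, not the actual pipeline API.

```python
import time
from typing import Callable, Iterator, List, Sequence


def batched(urls: Sequence[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive fixed-size chunks of the full URL list."""
    for start in range(0, len(urls), batch_size):
        yield list(urls[start:start + batch_size])


def run_in_batches(
    urls: Sequence[str],
    infer: Callable[[List[str]], None],  # placeholder for the real inference entry point
    batch_size: int = 150,               # the largest size we have tested so far
    pause_s: float = 1.0,                # pause between batches so the server is never flooded
) -> None:
    """Run inference over `urls` one bounded batch at a time."""
    for batch in batched(urls, batch_size):
        infer(batch)
        time.sleep(pause_s)
```

The pause between batches is only a simple throttle; the real fix may instead come from tuning the pipeline's own batch size or putting the work behind a job queue.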
Implementation Considerations
Deliverable
code to run inference on all of our data (a rough sketch follows this list)
any updates to the batch process that prove necessary
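For the "run on all our data" piece, a hypothetical driver might look like the sketch below. It reuses `run_in_batches` from the earlier sketch; `collections` stands in for however we actually enumerate SDE collections and their URLs, which is not settled here.

```python
import logging
from typing import Callable, Dict, List, Sequence

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("batch_inference")


def run_all_collections(
    collections: Dict[str, Sequence[str]],  # assumed shape: collection name -> its URLs
    infer: Callable[[List[str]], None],     # placeholder for the real inference entry point
    batch_size: int = 150,
) -> None:
    """Process every collection, one bounded batch at a time."""
    for name, urls in collections.items():
        log.info("Processing collection %s (%d URLs)", name, len(urls))
        try:
            run_in_batches(urls, infer, batch_size=batch_size)
        except Exception:
            # One failing collection should not abort the full run
            log.exception("Collection %s failed; continuing with the rest", name)
```

Logging per collection and continuing past failures should also give us the record of failure types the first task asks for.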
Dependencies
depends on