Meta-Dataset in TFDS: [F tensorflow/core/platform/default/env.cc:73] Check failed: ret == 0 (11 vs. 0) Thread tf_data_iterator_resource creation via pthread_create() failed. #84
When training on Meta-Dataset episodes (with all the training datasets) using the TFDS APIs, the reader fails after only a few tasks with the following error:
[F tensorflow/core/platform/default/env.cc:73] Check failed: ret == 0 (11 vs. 0) Thread tf_data_iterator_resource creation via pthread_create() failed.
This is on Linux with the latest TensorFlow and TensorFlow Datasets releases installed.
Is there some limit that needs to be increased to accommodate all the thread usage?
The TFDS implementation unfortunately creates a lot of threads, because it builds one dataset per class. I'm not sure what the best solution would be, but I'll look into it and report back.
tensorflow/tensorflow#41532 (comment) suggests that TF may use more than the number of available threads, and suggests things to check.
You could try using ulimit -u, as explained here (in another context), to raise that limit if it is the issue.
If that doesn't work, could you share the limits you see?
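For reference, here is a minimal diagnostic sketch for printing those limits from within the training process. It assumes Linux and uses Python's standard resource and threading modules; the commented-out setrlimit call is an optional, untested mitigation, not something the Meta-Dataset code does itself.

```python
import resource
import threading

# Per-user limit on processes/threads (the value `ulimit -u` reports), as a soft/hard pair.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("RLIMIT_NPROC soft/hard:", soft, hard)

# Python-level threads alive in this process.
print("Python threads:", threading.active_count())

# All OS threads in this process, including tf.data's C++ worker threads (Linux only).
with open("/proc/self/status") as f:
    for line in f:
        if line.startswith("Threads:"):
            print(line.strip())

# Hypothetical mitigation: raise the soft limit up to the hard limit (may need privileges).
# resource.setrlimit(resource.RLIMIT_NPROC, (hard, hard))
```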
Unfortunately I'm not aware of a way to ask TF to be more frugal.