BrokenProcessPool #27
Comments
Same problem here! Did you find any solution?
Nope, still crashing on Google Colab.
I've experienced a similar problem with the […]. I discovered that this error is stochastic -- that is, sometimes it doesn't occur! My brute-force solution (admittedly sub-optimal) is to run a […]. This is a problem with the […]. Perhaps it is no longer a problem in […].
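A brute-force retry along those lines could look roughly like the sketch below. This is only an illustration: build_databunch is a hypothetical placeholder for whatever step crashes intermittently, not a fastai function.

# Minimal retry sketch for a step that fails intermittently with BrokenProcessPool.
# `build_databunch` is a hypothetical placeholder, not part of fastai.
from concurrent.futures.process import BrokenProcessPool

def retry(build_databunch, attempts=5):
    for attempt in range(1, attempts + 1):
        try:
            return build_databunch()
        except BrokenProcessPool:
            print(f"BrokenProcessPool on attempt {attempt}, retrying...")
    raise RuntimeError(f"still failing after {attempts} attempts")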
Getting the same error on a Paperspace free GPU (Quadro M4000), both in Jupyter notebook mode and in terminal mode.
Hi!
I have been training a language model on Wikipedia in order to build a text classifier with fastai, using Google Colab. But after a few minutes, the process stops with the following error:
from fastai.text import *                    # TextList, Tokenizer, etc. (fastai v1)
from nlputils import get_wiki, split_wiki    # Wikipedia download/split helpers from the course's nlputils.py

# path and lang are defined earlier in the notebook (not shown here)
get_wiki(path, lang)
dest = split_wiki(path, lang)

bs = 64
data = (TextList.from_folder(dest)
        .split_by_rand_pct(0.1, seed=42)
        .label_for_lm()
        .databunch(bs=bs, num_workers=1))
data.save('tmp_lm')
BrokenProcessPool Traceback (most recent call last)
<ipython-input-…> in <module>()
1 bs=64
2 data = (TextList.from_folder(dest)
----> 3 .split_by_rand_pct(0.1, seed=42)
4 .label_for_lm()
5 .databunch(bs=bs, num_workers=1))
9 frames
/usr/local/lib/python3.6/dist-packages/fastai/data_block.py in _inner(*args, **kwargs)
478 self.valid = fv(*args, from_item_lists=True, **kwargs)
479 self.__class__ = LabelLists
--> 480 self.process()
481 return self
482 return _inner
/usr/local/lib/python3.6/dist-packages/fastai/data_block.py in process(self)
532 "Process the inner datasets."
533 xp,yp = self.get_processors()
--> 534 for ds,n in zip(self.lists, ['train','valid','test']): ds.process(xp, yp, name=n)
535 #progress_bar clear the outputs so in some case warnings issued during processing disappear.
536 for ds in self.lists:
/usr/local/lib/python3.6/dist-packages/fastai/data_block.py in process(self, xp, yp, name, max_warn_items)
712 p.warns = []
713 self.x,self.y = self.x[~filt],self.y[~filt]
--> 714 self.x.process(xp)
715 return self
716
/usr/local/lib/python3.6/dist-packages/fastai/data_block.py in process(self, processor)
82 if processor is not None: self.processor = processor
83 self.processor = listify(self.processor)
---> 84 for p in self.processor: p.process(self)
85 return self
86
/usr/local/lib/python3.6/dist-packages/fastai/text/data.py in process(self, ds)
295 tokens = []
296 for i in progress_bar(range(0,len(ds),self.chunksize), leave=False):
--> 297 tokens += self.tokenizer.process_all(ds.items[i:i+self.chunksize])
298 ds.items = tokens
299
/usr/local/lib/python3.6/dist-packages/fastai/text/transform.py in process_all(self, texts)
118 if self.n_cpus <= 1: return self._process_all_1(texts)
119 with ProcessPoolExecutor(self.n_cpus) as e:
--> 120 return sum(e.map(self._process_all_1, partition_by_cores(texts, self.n_cpus)), [])
121
122 class Vocab():
/usr/lib/python3.6/concurrent/futures/process.py in _chain_from_iterable_of_lists(iterable)
364 careful not to keep references to yielded objects.
365 """
--> 366 for element in iterable:
367 element.reverse()
368 while element:
/usr/lib/python3.6/concurrent/futures/_base.py in result_iterator()
584 # Careful not to keep a reference to the popped future
585 if timeout is None:
--> 586 yield fs.pop().result()
587 else:
588 yield fs.pop().result(end_time - time.monotonic())
/usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
/usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
I tried to work around this by varying the value of bs (64, 32, 16) and by changing the value of num_workers, but it still fails.
What happens is that RAM usage starts to climb until, at some point, the execution of the script is aborted.
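The traceback shows the pool is created inside fastai v1's Tokenizer.process_all, and that method skips the ProcessPoolExecutor entirely when n_cpus <= 1. So one possible workaround to try (a sketch under that assumption, not a confirmed fix) is to force single-process tokenization and a single-process DataLoader, reusing dest and bs from the snippet above:

# Possible workaround sketch (not a confirmed fix): force single-process
# tokenization so Tokenizer.process_all never creates a ProcessPoolExecutor,
# and keep the DataLoader single-process as well.
from fastai.text import *

defaults.cpus = 1   # Tokenizer falls back to defaults.cpus when n_cpus isn't set

data = (TextList.from_folder(dest)
        .split_by_rand_pct(0.1, seed=42)
        .label_for_lm()
        .databunch(bs=bs, num_workers=0))   # num_workers=0 disables DataLoader worker processes
data.save('tmp_lm')

This trades speed for stability: tokenization will be slower, but nothing runs in a child process that the OS can kill for exhausting memory.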
Details of the Google Colab Machine:
GPU Machine, RAM: 25.51 GB, Disk: 358.27 GB.
Is there any way to run this in that environment?
Best Regards!