I have discovered that memory leaks when using the model/decoder.

**Scenario**

Load the model, use it, unload it -> the memory is only partially released.
```python
import memory_profiler

from kaldiasr.nnet3 import KaldiNNet3OnlineModel

MODELDIR = '/path/to/model'  # adjust to your Kaldi model directory


def runner_model():
    mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
    print('> %s > mem=%s' % ('load model', max(mem_usage)))

    #
    # load the nnet3 model (this is where the large allocation happens)
    #
    model = KaldiNNet3OnlineModel(MODELDIR,
                                  acoustic_scale=1.0,
                                  beam=7.0,
                                  frame_subsampling_factor=3)
    #
    mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
    print('> %s > mem=%s' % ('model loaded', max(mem_usage)))

    try:
        for i in range(3):
            mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
            print('> %s > mem=%s' % (' ---> MODEL iteration #%s' % i, max(mem_usage)))

            #
            # do something
            #

            mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
            print('> %s > mem=%s' % (' ###> MODEL iteration #%s' % i, max(mem_usage)))
    finally:
        mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
        print()
        print('> %s > mem=%s' % ('delete model', max(mem_usage)))

        #
        del model
        #
        mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
        print()
        print('> %s > mem=%s' % ('model deleted', max(mem_usage)))


def task():
    mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
    print('> %s > mem=%s' % ('start task', max(mem_usage)))

    #
    runner_model()
    #
    mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
    print('> %s > mem=%s' % ('finish task', max(mem_usage)))


if __name__ == '__main__':
    task()
```
The output:

```
> start task > mem=45.46875
> load model > mem=45.46875
> model loaded > mem=356.10546875
> ---> MODEL iteration #0 > mem=356.10546875
> ###> MODEL iteration #0 > mem=356.10546875
> ---> MODEL iteration #1 > mem=356.10546875
> ###> MODEL iteration #1 > mem=356.10546875
> ---> MODEL iteration #2 > mem=356.10546875
> ###> MODEL iteration #2 > mem=356.10546875
> delete model > mem=356.10546875
> model deleted > mem=255.890625
> finish task > mem=255.890625
```
I see that roughly 200 MB (255.9 - 45.5 ≈ 210 MiB) are not released back, cf. `> start task > mem=45.46875` vs. `> finish task > mem=255.890625`.
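To tell whether this is a genuine per-load leak or a one-time cost (e.g. static tables inside Kaldi, or allocator caching that never returns pages to the OS), one could repeat the load/unload cycle and watch whether the retained memory grows. A minimal sketch, reusing `runner_model()` from the script above (the loop count of 5 is arbitrary):

```python
import memory_profiler

# Repeat the full load/use/unload cycle: if the baseline after each
# cycle keeps climbing, memory leaks per load; if it plateaus after
# the first cycle, the ~200 MB is a one-time allocation.
for cycle in range(5):
    runner_model()
    mem_usage = memory_profiler.memory_usage(-1, interval=.1, timeout=.5)
    print('> after cycle #%s > mem=%s' % (cycle, max(mem_usage)))
```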
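As a possible workaround (not a fix for the underlying issue), the whole load/decode cycle can be isolated in a child process; when that process exits, the OS reclaims all of its memory regardless of what the model/decoder retains internally. A sketch using the standard `multiprocessing` module:

```python
from multiprocessing import Process

# Run the whole model lifecycle in a child process; its entire
# address space, including anything the model/decoder does not
# free, is returned to the OS once join() completes.
p = Process(target=runner_model)
p.start()
p.join()
```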