Crash during channel fetch while playing back - java.lang.OutOfMemoryError #10496

Open
6 tasks done
Feuerswut opened this issue Oct 16, 2023 · 4 comments
Labels: bug (Issue is related to a bug), feed (Issue is related to the feed)

Comments

@Feuerswut

Feuerswut commented Oct 16, 2023

Checklist

  • I am able to reproduce the bug with the latest version given here: CLICK THIS LINK.
  • I made sure that there are no existing issues - open or closed - which I could contribute my information to.
  • I have read the FAQ and my problem isn't listed.
  • I have taken the time to fill in all the required details. I understand that the bug report will be dismissed otherwise.
  • This issue contains only one bug.
  • I have read and understood the contribution guidelines.

Affected versions

0.25.x (possibly much earlier 0.2x.x as well) - nightly 0.26

Steps to reproduce the bug (option 1)

  1. Open NewPipe; you need over 100 subscribed channels, ideally some on platforms other than YouTube.
  2. IMPORTANT: some time needs to have passed since you last fetched your subscriptions. Possibly less important: you also need a watch history of well over 1 MB, or more than 10,000 watch-history entries.
  3. Play a video and keep it playing.
  4. Minimize the player and go to the "News"/Subscriptions tab.
  5. Drag to refresh (using the slow/complete fetch method).
  6. The app fetches about 80% of the first 100+ subscriptions, gradually getting slower; the player starts lagging, and eventually everything freezes.
  7. The app suddenly resumes playback, and at the same time you get the crash/error/guru dialog and the player stops again.
  8. If you re-fetch the remaining subscriptions after reopening, the app continues only with the last 20% of elements.
  9. Some subscription data gets corrupted: if you recently unsubscribed from a channel, the subscription may reappear, or the other way around. Sometimes a number of subscriptions simply go missing and never come back, regardless of circumstance.
  10. The error also occurs when the app starts fetching subscriptions on its own because you have notifications enabled for them.

Steps to reproduce the bug (option 2)

  1. Have a video playing for a while; the crash becomes more likely over time ("feels random").
  2. Have more concurrent threads running and taking up memory -> worse GC behaviour -> crash more likely (see the heap-logging sketch after this list).
  3. Sometimes, when you perform UI actions during a crash, the stacktrace component throws yet another crash, so you just end up with an OutOfMemoryError and no stack trace. Try collapsing and expanding the mini-player while the app is freezing up and this will likely happen.
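
To make item 2 observable, a tiny diagnostic helper like the one below could be dropped into a test build. This is a hypothetical sketch, not NewPipe code; HeapLogger and logHeadroom are made-up names. Called periodically during a feed refresh, it shows heap headroom shrinking long before the OutOfMemoryError actually fires.

```kotlin
// Hypothetical diagnostic snippet (not part of NewPipe): call periodically while
// the feed is refreshing to watch heap headroom shrink before the OOM hits.
import android.util.Log

object HeapLogger {
    fun logHeadroom(tag: String = "HeapLogger") {
        val rt = Runtime.getRuntime()
        val usedMiB = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024)
        val maxMiB = rt.maxMemory() / (1024 * 1024) // the "growth limit" seen in the crash log
        Log.d(tag, "heap used: $usedMiB MiB of $maxMiB MiB")
    }
}
```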

Expected behavior

No crash.

Actual behavior

Crash. (Sometimes. Sometimes not...)

Screenshots/Screen recordings

It's just a java.lang.OutOfMemoryError. Nothing specific.

Logs

Exception

  • User Action: ui error
  • Request: ACRA report
  • Content Country: DE
  • Content Language: de-DE
  • App Language: de_DE
  • Service: none
  • Version: 0.25.2
  • OS: Linux Android 14 - 34
Crash log

java.lang.OutOfMemoryError: Failed to allocate a 8208 byte allocation with 270360 free bytes and 264KB until OOM, target footprint 268435456, growth limit 268435456; giving up on allocation because <1% of heap free after GC.
	at okio.Segment.<init>(Segment.kt:61)
	at okio.SegmentPool.take(SegmentPool.kt:90)
	at okio.Buffer.writableSegment$okio(Buffer.kt:589)
	at okio.InputStreamSource.read(JvmOkio.kt:91)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:128)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:42)
	at okhttp3.internal.http2.Http2Stream$FramingSource.receive$okhttp(Http2Stream.kt:445)
	at okhttp3.internal.http2.Http2Stream.receiveData(Http2Stream.kt:276)
	at okhttp3.internal.http2.Http2Connection$ReaderRunnable.data(Http2Connection.kt:650)
	at okhttp3.internal.http2.Http2Reader.readData(Http2Reader.kt:180)
	at okhttp3.internal.http2.Http2Reader.nextFrame(Http2Reader.kt:119)
	at okhttp3.internal.http2.Http2Connection$ReaderRunnable.invoke(Http2Connection.kt:618)
	at okhttp3.internal.http2.Http2Connection$ReaderRunnable.invoke(Http2Connection.kt:609)
	at okhttp3.internal.concurrent.TaskQueue$execute$1.runOnce(TaskQueue.kt:98)
	at okhttp3.internal.concurrent.TaskRunner.runTask(TaskRunner.kt:116)
	at okhttp3.internal.concurrent.TaskRunner.access$runTask(TaskRunner.kt:42)
	at okhttp3.internal.concurrent.TaskRunner$runnable$1.run(TaskRunner.kt:65)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:644)
	at java.lang.Thread.run(Thread.java:1012)
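
Reading the trace: the allocation that fails is an okio Segment (an ~8 KiB network buffer) being filled while OkHttp reads an HTTP/2 response body, and the heap is already at its 268435456-byte (256 MiB) growth limit, so even that small allocation tips it over. Purely to illustrate where those Segments come from (this is not NewPipe's actual code, and fetchStreaming is a made-up name): streaming a response body keeps only a handful of Segments alive at a time, whereas buffering a whole body keeps roughly one per 8 KiB of payload.

```kotlin
// Sketch only -- not NewPipe code. Shows the streaming read path that the okio
// frames in the trace above belong to.
import okhttp3.OkHttpClient
import okhttp3.Request

fun fetchStreaming(url: String) {
    val client = OkHttpClient()
    val request = Request.Builder().url(url).build()
    client.newCall(request).execute().use { response ->
        val source = response.body!!.source()
        while (!source.exhausted()) {
            // process the body incrementally instead of reading it all into memory
            source.readUtf8Line()
        }
    }
}
```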


Affected Android/Custom ROM version

Android 13, Graphene 13, Graphene 14

Affected device model

Google Pixel 6

Additional information

May occur more frequently in bad network conditions. I have ze German Neuländ internetz and it is 🐢.

Remedial options

  • What we need (I think) is a performance analysis of app components during debug tests (e.g. long-lasting tests that also take network errors and the like into account; GitHub Actions?) and, more importantly, during production/runtime. Tools so that we (amateur users) can test releases more thoroughly. Maybe a how-to-canary section in some FAQ, maybe more.

  • Assign more memory (see the heap-budget sketch after this list).

  • Somehow improve GC performance (more tech-heavy).

  • Rewrite code, specifically badly performing components of the app, in memory-safe/performant languages. This is a very intense option, more of a last resort, as it makes app distribution more difficult.
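
On the "assign more memory" option: Android apps can opt into a larger per-app heap with android:largeHeap="true" on the <application> element of the manifest. The sketch below is hypothetical (logHeapBudget is a made-up name, not NewPipe code); it only reports the device's standard and large heap budgets, which is enough to judge how much headroom that bodge would actually buy.

```kotlin
// Sketch of the "assign more memory" option. The switch itself is a manifest change
// (android:largeHeap="true"); this snippet only reports the heap budgets on offer.
import android.app.ActivityManager
import android.content.Context
import android.util.Log

fun logHeapBudget(context: Context) {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    // memoryClass: standard per-app heap limit in MiB; largeMemoryClass: limit with largeHeap enabled
    Log.d("HeapBudget", "standard: ${am.memoryClass} MiB, large: ${am.largeMemoryClass} MiB")
}
```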

@Feuerswut added the bug and needs triage labels on Oct 16, 2023
@Feuerswut
Author

Maybe related issues: #9349, #9031 (unlikely), ...

@opusforlife2
Collaborator

@AudricV How efficient is the feed update process? Is memory allocated channel by channel or in larger chunks? If this is dependent on the number of channels, then maybe the app isn't freeing memory fast enough?
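
To illustrate the difference being asked about, here is a simplified, hedged sketch (not NewPipe's actual feed loader; ChannelFeed, fetchFeed, and refreshSubscriptions are made-up names). If channels are fetched in small batches and each batch's responses are dropped before the next one starts, peak memory stays proportional to the batch size; if all responses are accumulated before processing, it grows with the subscription count.

```kotlin
// Illustrative sketch only, not NewPipe's feed loader: fetch subscriptions in
// small batches and release each batch's responses before the next begins.
data class ChannelFeed(val channelId: String, val itemTitles: List<String>)

// Placeholder for the real per-channel network call.
fun fetchFeed(channelId: String): ChannelFeed = ChannelFeed(channelId, emptyList())

fun refreshSubscriptions(channelIds: List<String>, batchSize: Int = 8): Int {
    var totalItems = 0
    for (batch in channelIds.chunked(batchSize)) {
        val feeds = batch.map { fetchFeed(it) } // only `batchSize` feeds are alive at once
        totalItems += feeds.sumOf { it.itemTitles.size }
        // `feeds` goes out of scope here, so the GC can reclaim it before the next batch
    }
    return totalItems
}
```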

@Feuerswut
Author

Feuerswut commented Oct 19, 2023

> @AudricV How efficient is the feed update process? Is memory allocated channel by channel or in larger chunks? If this is dependent on the number of channels, then maybe the app isn't freeing memory fast enough?

Tbh, maybe allocate more memory as a temporary bodge for this and some related issues, and concentrate on the upcoming refactoring effort with this in mind?

This bug is a hassle, but it does not break the app completely.

[...]

The bug is also very inconsistent. Sometimes you don't even have to play a video for this to happen.



This one is also nice:

Exception

  • User Action: ui error
  • Request: ACRA report
  • Content Country: DE
  • Content Language: de-DE
  • App Language: de_DE
  • Service: none
  • Version: 0.25.2
  • OS: Linux Android 14 - 34
Crash log

java.lang.OutOfMemoryError: OutOfMemoryError thrown while trying to throw an exception; no stack trace available


@opusforlife2 added the feed label and removed the needs triage label on Oct 22, 2023
@Andmyaxe90

Still getting these crashes in 0.26.0. They seem to happen more often on long archived livestreams.
