Expectations/requirements for VideoFrame and AudioData timestamps #80
Comments
The mediacapture-transform specification does not currently describe how …

Related: #96

Potential future issue when spatial scalability is supported: With spatial scalability, you can have multiple … Currently, we do not configure the operating point in the WebCodecs decoder, so the decoder doesn't know the desired operating point or the layers that the operating point depends on. So at any given …

In this case the decoder would produce multiple VideoFrames with the same timestamp, but authors would be expected to discard many of these, passing only their desired resolution to the MediaStreamTrackGenerator (MSTG).
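To make that concrete, here is a rough TypeScript sketch of the filtering an author would be expected to do in that case. The `MediaStreamTrackGenerator` declaration (a non-standard, Chromium-shipped API, declared by hand below), the `desiredWidth`, and the codec string are all assumptions for illustration, not anything the spec mandates:

```ts
// Rough sketch only. Assumptions (not from the spec): the non-standard Chromium
// MediaStreamTrackGenerator ("MSTG", declared by hand below), an SVC stream with
// several spatial layers, and a hypothetical desiredWidth chosen by the page.
declare class MediaStreamTrackGenerator extends MediaStreamTrack {
  constructor(init: { kind: 'audio' | 'video' });
  readonly writable: WritableStream<VideoFrame>;
}

const desiredWidth = 640; // hypothetical: the one resolution this page wants

const generator = new MediaStreamTrackGenerator({ kind: 'video' });
const writer = generator.writable.getWriter();

const decoder = new VideoDecoder({
  output: (frame: VideoFrame) => {
    // With spatial scalability, several frames can arrive carrying the same
    // frame.timestamp (one per spatial layer). Forward only the layer the page
    // wants; close the rest so their buffers are released.
    if (frame.codedWidth === desiredWidth) {
      void writer.write(frame); // the sink takes ownership of this frame
    } else {
      frame.close();
    }
  },
  error: (e: DOMException) => console.error('decode error', e),
});

decoder.configure({ codec: 'av01.0.04M.08' }); // example AV1 codec string
```

Closing the frames that are not forwarded matters in practice: decoded VideoFrames hold potentially large, often GPU-backed buffers, and decoding can stall if too many stay open.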
This issue had an associated resolution in the WebRTC November 19 2024 meeting – 19 November 2024 (Issue #80: Expectations/Requirements for VideoFrame and AudioData timestamps): …
Is it valid to append multiple VideoFrames or AudioData objects with the same timestamp (e.g. timestamp = 0) to a MediaStreamTrack? If so, what is the behavior? Does the spec describe this?
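For reference, a rough sketch of the case the question describes, again assuming the non-standard MediaStreamTrackGenerator; whether the second write below is valid, and what a sink is supposed to do with it, is exactly what the question asks:

```ts
// Rough sketch of the case being asked about. Assumptions (not from the spec):
// the non-standard Chromium MediaStreamTrackGenerator, and a tiny hand-built
// 2x2 RGBA frame just to have something to write.
declare class MediaStreamTrackGenerator extends MediaStreamTrack {
  constructor(init: { kind: 'audio' | 'video' });
  readonly writable: WritableStream<VideoFrame>;
}

const generator = new MediaStreamTrackGenerator({ kind: 'video' });
const writer = generator.writable.getWriter();

// Two distinct frames that both claim timestamp 0 (WebCodecs timestamps are in microseconds).
const rgba = new Uint8Array(2 * 2 * 4); // 2x2 RGBA pixels, all zero
const frameInit = { format: 'RGBA' as const, codedWidth: 2, codedHeight: 2, timestamp: 0 };

void writer.write(new VideoFrame(rgba, frameInit));
void writer.write(new VideoFrame(rgba, frameInit)); // same timestamp as the first write: valid? what does the sink do?
```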