This issue tracks the tasks needed for livepeer/lpms#119
In order to minimize the overhead of per-segment transcode sessions, we need to keep the transcode session persistent across segments. This involves a major reworking of the Cgo transcoder internals from being "one-shot" to supporting resumable, per-segment transcoding.
The general behavior of the LPMS transcoding API will remain as-is, taking an input and synchronously returning multiple outputs. However, the input will also include a context pointer for the transcode loop. Under the hood, the API will dispatch the input to the transcode loop, and wait until results are complete, before returning to the caller.
There will also be a new API function introduced to stop the transcode loop.
Modified LPMS Golang API
Probably similar in usage to the Cgo API (see the following section on Cgo), although the function signatures should be a bit friendlier. The signatures are derived from livepeer/lpms#124
type TranscodeContext *C.transcode_context

type TranscodeResults struct {
    ... other data ...

    // hidden field for the transcoder
    TranscodeCtx TranscodeContext
}

func Transcode3(input *TranscodeOptionsIn, ps []TranscodeOptions) (res []TranscodeResults, err error)

func TranscodeStop(ctx TranscodeContext)
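To make the intended flow concrete, here is a rough usage sketch against the proposed signatures above. It is a sketch only: it assumes TranscodeOptionsIn gains a TranscodeCtx field (per the note above that the input will include a context pointer for the transcode loop), that a call without a context allocates a new session, and that variable names and the Fname field are illustrative rather than part of the proposal.

```go
// Illustrative only: per-segment transcoding against the proposed API.
func transcodeStream(segments []string, opts []TranscodeOptions) error {
	// First segment: no context yet, so the call allocates a new session.
	res, err := Transcode3(&TranscodeOptionsIn{Fname: segments[0]}, opts)
	if err != nil {
		return err
	}
	ctx := res[0].TranscodeCtx // persistent session handle (hidden field above)

	// Subsequent segments reuse the same session via the context pointer.
	for _, seg := range segments[1:] {
		in := &TranscodeOptionsIn{Fname: seg, TranscodeCtx: ctx} // assumed field
		if _, err := Transcode3(in, opts); err != nil {
			return err
		}
	}

	// Tear down the transcode loop once the stream is finished.
	TranscodeStop(ctx)
	return nil
}
```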
Modified Cgo API:
Invoke lpms_transcode for the first time to receive a transcode context handle. Use this handle to continue transcoding within the same session. Invoke lpms_transcode_stop with the handle to halt the session.
// Same as existing `lpms_transcode` with the addition of an `output_results` struct.
// See https://github.com/livepeer/lpms/issues/124#issuecomment-502888723 for details
int lpms_transcode(input_params *inp, output_params *params, output_results *res, int nb_outputs);

// API to terminate the transcode loop.
int lpms_transcode_stop(transcode_context *ctx);

typedef struct {
    ... existing fields ...

    // Pointer to a persistent transcode context.
    // If NULL, a new context is allocated.
    transcode_context *ctx;
} input_params;

typedef struct {
    // Number of encoded pixels. (Payment accounting)
    int64_t pixels;

    // Pointer to the persistent transcode context.
    // Use within `input_params` to continue the loop.
    // Must be released via `lpms_transcode_stop`.
    // This is a little redundant since we're returning a ctx
    // per transcoded rendition, but don't think it's a big deal...
    transcode_context *ctx;
} output_results;
Unknowns
Check NVENC state reset, and ensure it works for our needs
x264 does not appear to be flushable. Check the cost of setting up a new x264 session per segment
Determine whether we can recreate the muxer per segment
Code Changes
Separate IO (AVIOContext*) from demuxer within input_ctx
New muxer per stream (?)
Define "close loop" API - lpms_transcode_stop
Separate transcode loop thread from Transcode function entry point
Separate out initialization and teardown routines as necessary
Return thread handle or context pointer from Transcode function entry point
Implement older Transcode APIs in terms of newer API to facilitate testing
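For the last item, a minimal sketch of what such a shim could look like, assuming the Go signatures sketched earlier; the function name here is a placeholder, not the existing API's actual signature:

```go
// Hypothetical shim: the existing one-shot semantics expressed in terms of
// the resumable API, mainly so existing tests also exercise the new path.
func transcodeOneShot(input *TranscodeOptionsIn, ps []TranscodeOptions) ([]TranscodeResults, error) {
	// With no prior context on the input, this allocates a fresh session.
	res, err := Transcode3(input, ps)
	if err != nil {
		return nil, err
	}
	// Stop the transcode loop right away so the session is not reused,
	// matching the old one-session-per-segment behavior.
	TranscodeStop(res[0].TranscodeCtx)
	return res, nil
}
```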
To Test For
Preroll audio handling. Suspect we still need to drop the very first audio frame but subsequent ones should be OK since the encoder is persistent. Changes to existing preroll handling should be minimal (if anything at all), but it's something that still needs to be checked.
Ensure we can still correctly handle discontinuous and out-of-order segments for the same stream. Might require manually resetting some pts-related state.
Memory issues via Valgrind / asan
Memory usage with many idle transcode sessions, especially on GPUs
Concurrency issues with ThreadSanitizer
Go-Livepeer Client Integration
The integration between the goclient and the new LPMS API can be detailed later, but here is a sketch of some possibilities.
We might want to make usage more straightforward on the go-livepeer side, such as automatically closing transcoders on a timer after some period of inactivity, or upon transcode-loop expiration. This can be implemented within transcoder.go on the goclient side, as it seems preferable to keep transcoder usage explicit within LPMS.
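As a rough sketch of the inactivity-timer idea, assuming a small wrapper on the go-livepeer side (e.g. in transcoder.go) that owns the session; segmentTranscoder, touch, and the stop callback are all hypothetical names:

```go
import (
	"sync"
	"time"
)

// Hypothetical go-livepeer-side wrapper that tears down an idle LPMS
// session after a period of inactivity.
type segmentTranscoder struct {
	mu    sync.Mutex
	timer *time.Timer
	stop  func() // would call the LPMS stop API for this session
}

// touch (re)arms the idle timer; intended to be called after every segment.
func (t *segmentTranscoder) touch(idle time.Duration) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.timer != nil {
		t.timer.Stop()
	}
	t.timer = time.AfterFunc(idle, t.stop)
}
```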
For cases where the GPU is used, we might want to track previously assigned GPUs, determine their state (busy/idle), and start a new transcode session on an idle GPU if necessary. Then subsequent segments have two local GPUs to choose from. Need to determine the RAM implications of this.
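Similarly, a purely illustrative sketch of the GPU bookkeeping described above; how "busy" is actually determined, and the RAM implications, are exactly the open questions, and gpuTracker and its methods are hypothetical:

```go
import "sync"

// Hypothetical per-node bookkeeping for choosing an idle GPU. "Idle" is
// approximated here by the number of active transcode sessions; a real
// check might instead inspect the transcode loop or driver state.
type gpuTracker struct {
	mu       sync.Mutex
	sessions map[int]int // GPU index -> active transcode sessions
}

func newGPUTracker(numGPUs int) *gpuTracker {
	s := make(map[int]int, numGPUs)
	for i := 0; i < numGPUs; i++ {
		s[i] = 0
	}
	return &gpuTracker{sessions: s}
}

// pickGPU returns the least-loaded GPU and records the new session against it.
func (g *gpuTracker) pickGPU() int {
	g.mu.Lock()
	defer g.mu.Unlock()
	best, bestCount := 0, int(^uint(0)>>1) // assumes at least one GPU
	for gpu, count := range g.sessions {
		if count < bestCount {
			best, bestCount = gpu, count
		}
	}
	g.sessions[best]++
	return best
}
```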