This creates a new command to run a load test on WebRTC livestreaming. The focus for now is on testing the scalability of WebRTC playback, so ingest is simplified to stay on RTMP.
There is a single binary and container which can be run as different roles: `orchestrator`, `streamer` and `player`. Explaining each of these roles:

- `orchestrator` can run on the user machine (later we can set it up on CI) since it just calls APIs and doesn't run any heavy load. It creates a stream and then starts 1 streamer and N players on Google Cloud Run, cleaning up the created resources on exit (e.g. on `ctrl+c`). It can also be given a `-test-id` CLI argument (or in the config file) so it picks up on a running test (and deletes its resources on exit). See the lifecycle sketch after this list.
- `streamer` gets an RTMP URL and stream key and ingests to that endpoint. There are a couple of files embedded in the Docker image and it can also download a file if a URL is provided, but by default it uses a nice 1080p@30fps Big Buck Bunny on loop. A rough sketch of the ffmpeg invocation follows this list.
- `player` receives a player URL, playback ID (TODO: or playback URL) and opens a headless browser on the `lvpr.tv` player for that stream. It uses the `lowLatency=force` option to make sure it plays only WebRTC.
- `player` opens multiple tabs on the headless browser, so it behaves as multiple simultaneous viewers at once. This is in order to share and save Cloud Run resources with some parallelism. A headless-browser sketch is included below.
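For reference, here is a minimal sketch of the orchestrator lifecycle described in the list above. It is not the code from this PR: `createResources` and `deleteResources` are hypothetical placeholders for the stream/Cloud Run API calls, and only the `-test-id` flag handling and the cleanup-on-`ctrl+c` pattern are illustrated.

```go
// Hypothetical sketch of the orchestrator lifecycle: create (or resume) a test,
// wait until interrupted, then delete the resources created for it.
package main

import (
	"context"
	"flag"
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	testID := flag.String("test-id", "", "pick up an already running test instead of creating a new one")
	flag.Parse()

	// Cancel the context on ctrl+c (SIGINT) or SIGTERM so cleanup always runs.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	id := *testID
	if id == "" {
		// createResources (hypothetical): create the stream, 1 streamer and N players.
		var err error
		if id, err = createResources(ctx); err != nil {
			log.Fatal(err)
		}
	}
	// Always delete the Cloud Run resources for this test on exit.
	defer deleteResources(context.Background(), id)

	<-ctx.Done() // wait for interruption (waiting for test completion is elided here)
}

// Hypothetical helpers; the real implementation would call the stream and Cloud Run APIs.
func createResources(ctx context.Context) (string, error) { return "test-123", nil }

func deleteResources(ctx context.Context, id string) { log.Printf("deleting resources for %s", id) }
```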
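A minimal sketch of the streamer's ingest step, assuming it shells out to ffmpeg (the actual implementation in this PR may differ); the environment variable names and the embedded file path are placeholders.

```go
// Hypothetical sketch: loop a local test file and push it to the RTMP ingest endpoint.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	rtmpURL := os.Getenv("RTMP_URL") // e.g. rtmp://ingest.example.com/live (placeholder)
	streamKey := os.Getenv("STREAM_KEY")

	cmd := exec.Command("ffmpeg",
		"-re", // read input at its native frame rate
		"-stream_loop", "-1", // loop the input file forever
		"-i", "/videos/bbb_1080p_30fps.mp4", // embedded Big Buck Bunny (placeholder path)
		"-c", "copy", // no re-encoding, just remux
		"-f", "flv", // RTMP expects FLV
		rtmpURL+"/"+streamKey,
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```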
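And a sketch of the player role, assuming a Go headless-browser library such as chromedp. The `lowLatency=force` option matches the description above, but the exact `lvpr.tv` query parameters and the tab count are assumptions for illustration.

```go
// Hypothetical sketch: open N tabs of the lvpr.tv player in one headless browser,
// each tab behaving as an independent viewer.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/chromedp/chromedp"
)

func main() {
	playbackID := "abcd1234" // placeholder playback ID
	numTabs := 10            // simultaneous viewers per container (placeholder)
	url := fmt.Sprintf("https://lvpr.tv/?v=%s&lowLatency=force", playbackID)

	// The root context owns the headless browser process.
	browserCtx, cancel := chromedp.NewContext(context.Background())
	defer cancel()
	if err := chromedp.Run(browserCtx); err != nil { // start the browser
		log.Fatal(err)
	}

	for i := 0; i < numTabs; i++ {
		// Deriving a context from browserCtx opens a new tab in the same browser.
		tabCtx, cancelTab := chromedp.NewContext(browserCtx)
		defer cancelTab()
		if err := chromedp.Run(tabCtx, chromedp.Navigate(url)); err != nil {
			log.Fatal(err)
		}
	}

	// Keep all tabs (viewers) playing until the container is stopped externally.
	select {}
}
```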