How does that work? Let’s say a stream is running with a 5s latency with the streamer and a bunch of viewers, all located in the US on fast connections. Now someone from rural Australia connects to the stream on a slow connection. Do you pause the stream for everyone in order to build a buffer for that new user and allow them to catch up and synchronize? What does the latency look like after that? 15-20s for everyone? That sounds like a terrible experience! When users find out what’s going on they’re going to demand the ability to block these slow/distant users from the stream so that everyone else doesn’t have their performance degraded.
It also doesn’t solve all of the interactivity issues. If a streamer is directly chatting with their viewers, a 15s latency is huge! It effectively makes normal back-and-forth conversations impossible and therefore leads to disengagement.
Why is this such a problem? Look at the relationship between engagement and monetization. Highly engaged viewers give way more money to streamers than do passive viewers. Highly engaged viewers are simply way more valuable as an audience, so their experience is given top priority. This is why best-effort low latency without synchronization makes sense. Lowest-common-denominator synchronized video sounds much more difficult to achieve if these “power viewers” are your priority audience.
> How does that work? Let’s say a stream is running with a 5s latency with the streamer and a bunch of viewers, all located in the US on fast connections. Now someone from rural Australia connects to the stream on a slow connection.
It works exactly like non-p2p streaming, since playback synchronization is independent of the p2p part and entirely managed by the video player through its buffer management. Basically you get the “latest block” from the stream's manifest (which always comes from the CDN), and the player plays a few blocks behind, keeping its buffer full.
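To make the mechanism concrete, here's a minimal sketch of that buffer-driven positioning, assuming segments are identified by simple integer sequence numbers in the manifest (all names and constants below are illustrative, not any real player's API):

```python
# Hedged sketch of live-edge positioning: start a few segments behind the
# latest manifest entry and keep the buffer full up to the live edge.

SEGMENT_DURATION_S = 2.0
TARGET_BUFFER_SEGMENTS = 3  # play this many segments behind the live edge

def choose_start_segment(manifest_segments: list[int]) -> int:
    """Pick the segment to start playback from: a few behind the latest."""
    latest = manifest_segments[-1]
    return max(manifest_segments[0], latest - TARGET_BUFFER_SEGMENTS)

def needed_segments(playhead: int, latest: int) -> list[int]:
    """Segments to fetch so the buffer stays full up to the live edge."""
    return list(range(playhead, min(playhead + TARGET_BUFFER_SEGMENTS, latest) + 1))

# A manifest listing segments 100..110: start 3 segments behind the edge.
start = choose_start_segment(list(range(100, 111)))
print(start)                        # 107
print(needed_segments(start, 110))  # [107, 108, 109, 110]
```

Every viewer runs the same logic independently against the same manifest, which is why a new viewer joining never requires pausing anyone else: they just start a few blocks behind the current live edge, like everyone else did.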
> What does the latency look like after that? 15-20s for everyone?
Yes, but that's already the case for most live streaming websites you can find, unless they specifically aim for low latency (which comes with a QoS drawback: a smaller buffer means rebuffering is more likely if the end user's connection isn't flawless, so low latency is a trade-off), and it was the case on Twitch itself before 2019.
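The trade-off is easy to see with back-of-envelope arithmetic, assuming glass-to-glass latency is roughly the segment duration times the number of segments the player buffers, plus a fixed encode/ingest overhead (all numbers below are illustrative):

```python
# Rough latency model for segmented live streaming; real pipelines vary.

def glass_to_glass_latency(segment_s: float, buffered: int, pipeline_s: float) -> float:
    """Approximate end-to-end latency of a segmented live stream, in seconds."""
    return segment_s * buffered + pipeline_s

# Classic HLS-style defaults: 6s segments, 3 buffered, ~2s pipeline -> ~20s.
print(glass_to_glass_latency(6.0, 3, 2.0))  # 20.0
# Low-latency tuning: 2s segments, 2 buffered, ~1s pipeline -> ~5s.
print(glass_to_glass_latency(2.0, 2, 1.0))  # 5.0
```

Shrinking either factor cuts latency, but a 2-segment buffer absorbs far fewer network hiccups than a 3×6s one, which is exactly the QoS drawback mentioned above.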
> It also doesn’t solve all of the interactivity issues. If a streamer is directly chatting with their viewers, a 15s latency is huge! It effectively makes normal back-and-forth conversations impossible and therefore leads to disengagement.
It's less convenient, but Twitch worked like that for almost a decade, so it's not a complete showstopper. And again, 15s isn't the minimum you can reach; it's going below 5s that's not necessarily feasible (when I left my former employer, 5s was the limit, but people were working on it, so maybe they've since reached the 3s they were aiming for).
> Highly engaged viewers are simply way more valuable as an audience, so their experience is given top priority. This is why best-effort low latency without synchronization makes sense.
Maybe, but how many times more valuable? Has the per-user value increased by a factor of 20 since 2019? If not, reducing bandwidth costs might have made more sense than trying to improve engagement.
> It works exactly like non-p2p streaming, since playback synchronization is independent of the p2p part and entirely managed by the video player through its buffer management. Basically you get the “latest block” from the stream's manifest (which always comes from the CDN), and the player plays a few blocks behind, keeping its buffer full.
Those blocks still need to be sent to the new user's entry point into the CDN. I doubt you're going to save any money by sending every single video stream to every single node in the CDN; you'll send the video only to nodes that have connected viewers of that particular stream. Then you still have the issue of pausing everyone else's feed until the new user catches up.
> Yes, but that's already the case for most live streaming websites you can find, unless they specifically aim for low latency (which comes with a QoS drawback: a smaller buffer means rebuffering is more likely if the end user's connection isn't flawless, so low latency is a trade-off), and it was the case on Twitch itself before 2019.
I think Twitch has moved on from that. Users' expectations have gone way up. They want more interaction with streamers, not less. They aren't going to tolerate a backslide to higher latencies if the only purpose is to save Twitch money. Competition from other platforms, especially YouTube, is fierce.
High latency (>5s) leads to this shallow form of engagement where viewers send one-way comments to the streamer and then wait (and hope) they hear back. Low latency promotes live conversations. This is especially critical on small streams where the number of people in the chat doesn't overwhelm the streamer. And as I said before, smaller streams are less likely to have streamers who lean into heavy third-party monetization strategies. These streamers often develop closer relationships with their viewers and have a higher percentage of subscribed viewers.
> Maybe, but how many times more valuable? Has the per-user value increased by a factor of 20 since 2019? If not, reducing bandwidth costs might have made more sense than trying to improve engagement.
I think they are trying to reduce bandwidth costs, but not uniformly across users. My theory is that they are trying to discourage casual viewers who don't subscribe. Getting rid of these "freeloader users" would naturally cause per-user value to go up!
> I doubt you're going to save any money by sending every single video stream to every single node in the CDN
What? The meat of the cost isn't sending video segments to the CDN; it's what the CDN charges you for delivering those segments thousands of times.
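As a rough illustration of where the money goes (the price and bitrate below are hypothetical, not any real CDN's rates), egress cost scales with viewer count, which is why offloading delivery to p2p matters so much more than origin-to-edge distribution:

```python
# Illustrative cost arithmetic only; all numbers are made up for the example.

PRICE_PER_GB = 0.05          # hypothetical CDN egress price, $/GB
STREAM_GB_PER_HOUR = 1.35    # roughly a 3 Mbps stream

def cdn_cost_per_hour(viewers: int, p2p_offload: float = 0.0) -> float:
    """Egress cost for one stream-hour; p2p_offload is the fraction served peer-to-peer."""
    delivered_gb = viewers * STREAM_GB_PER_HOUR * (1.0 - p2p_offload)
    return delivered_gb * PRICE_PER_GB

print(round(cdn_cost_per_hour(10_000), 2))                   # pure CDN delivery
print(round(cdn_cost_per_hour(10_000, p2p_offload=0.8), 2))  # 80% served via p2p
```

The ingest side (one copy of the stream pushed to each edge node with viewers) is a small fixed term by comparison, which is the point being made here: per-viewer egress is what dominates the bill.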
> Then you have the issue of pausing everyone else’s feed until the new user catches up.
No, if the p2p system isn't working well, it falls back to the CDN transparently for the user. And it's not something theoretical: we ran this in production for up to several hundred thousand viewers, and it worked great.
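The fallback described here can be sketched in a few lines. This is a minimal illustration, not the actual production system; `peers` and `cdn` are plain dicts standing in for real transports:

```python
# Hedged sketch of transparent CDN fallback, assuming a p2p layer that may miss.

def fetch_segment(segment_id, peers, cdn):
    """Try each peer first; on a miss, fall back to the CDN so playback never stalls."""
    for peer in peers:
        data = peer.get(segment_id)
        if data is not None:
            return data  # served peer-to-peer, no CDN egress
    return cdn[segment_id]  # transparent fallback for the viewer

cdn_store = {42: b"segment-42"}
print(fetch_segment(42, peers=[{}, {}], cdn=cdn_store))          # fallback to CDN
print(fetch_segment(42, peers=[{42: b"p2p-42"}], cdn=cdn_store)) # served by a peer
```

The key property is that the viewer's player can't tell the difference: a segment arrives either way, so a struggling p2p mesh degrades cost savings, not playback.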
> I think Twitch has moved on from that. Users' expectations have gone way up. They want more interaction with streamers, not less. They aren't going to tolerate a backslide to higher latencies if the only purpose is to save Twitch money.
Maybe, people getting used to comfort and never wanting to go back is a real phenomenon. But Twitch had many years between when I worked in the field and 2019 to deploy such a system without degrading the experience and to me it's pretty disappointing that they didn't.