"Trickle Caching" for HD Content Streaming
These days it's no longer unusual to have 1080p content streamed over the internet, and there are now multiple XBMC plugins that access this content. There is however a problem, in that these streams saturate the memory cache. When the cache is full you can't download any more, so downloading is paused, and there's a latency penalty involved in taking a paused download back to full use of the available bandwidth.

This is one of the reasons people see problems with 1080p streamed content in XBMC despite having internet connections that should handle it. Because XBMC grabs as much as it can right away, it fills the cache up, stops downloading for a bit, then tries to download again when the cache dips back down. If the latency between a paused download and full use of bandwidth is too great, a buffer underrun will occur. This is what causes the skipping in and out of apparently speedy buffering.

This is basically a "Greedy Scheduling" problem. Ideally, you want to always be downloading, so there's no gap between a paused download and best use of bandwidth. But if you take everything as fast as it can arrive, you'll fill the buffer, causing a buffer overrun that forces you to pause downloading. So you end up alternating between buffer overruns and buffer underruns.

The way to avoid this is actually pretty simple: don't download at the highest speed possible, but instead keep a trickle of content coming in so the buffer is constantly being topped up. The classic analogy is filling a bucket with a hole in it. You don't want the tap on full, which overflows the bucket, and you don't want to keep turning it on and off, because then the bucket will run dry.

The network stack actually has some clever ways of throttling back downloads and reducing the bandwidth being used. The best part is that most stacks do this without needing any special system calls at all: just limit how many read() calls are made in a given period of time, and the network stack will throttle the download towards the same rate.
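A minimal sketch of that idea in Python, to make it concrete (the name paced_read, the chunk size and the rate parameter are illustrative assumptions, not anything from XBMC's code): by spacing out the recv() calls, the kernel's socket buffer fills up and TCP flow control makes the sender back off on its own.

    import socket
    import time

    def paced_read(sock: socket.socket, target_bytes_per_sec: int,
                   chunk_size: int = 65536):
        """Yield chunks from `sock` at roughly `target_bytes_per_sec`.

        By not draining the socket faster than this, the kernel's receive
        buffer stays full and TCP flow control throttles the sender for us.
        """
        interval = chunk_size / float(target_bytes_per_sec)  # seconds per chunk
        while True:
            start = time.monotonic()
            chunk = sock.recv(chunk_size)
            if not chunk:          # server closed the connection
                return
            yield chunk
            # Sleep off whatever is left of this chunk's time slice.
            elapsed = time.monotonic() - start
            if elapsed < interval:
                time.sleep(interval - elapsed)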

So by rate limiting the number of read() calls, you can keep the buffer in check. Work out a 'sweet spot' in the buffer: above it, start decreasing the rate of read() calls; below it, start increasing it. In theory you never overrun, and if enough bandwidth is available you should also prevent underruns.
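A sketch of that feedback loop, sitting on top of the paced reader above (again, names like cache_fill and sweet_spot are made up for illustration and don't correspond to real XBMC internals):

    def adjust_rate(current_rate: int, cache_fill: float, sweet_spot: float,
                    stream_bitrate: int, step: float = 0.10) -> int:
        """Return a new target download rate in bytes/sec.

        cache_fill and sweet_spot are fractions of the cache (0.0 to 1.0).
        """
        if cache_fill > sweet_spot:
            # Buffer is comfortably full: trickle, easing off the download rate.
            new_rate = current_rate * (1.0 - step)
        else:
            # Buffer is draining: speed back up before an underrun happens.
            new_rate = current_rate * (1.0 + step)
        # Never drop below the stream's own bitrate, or the buffer will drain.
        return max(int(new_rate), stream_bitrate)

Called once per cache check, this keeps the fill level hovering around the sweet spot instead of bouncing between full and empty.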

