Posts: 144
Joined: Aug 2010
Reputation: 2
Right... So, we go from "There is no problem here," to "I am going to nitpick an unrelated mistake about hardware models in your post and berate you about unrelated stuff," to "We have been there, done that already," to "Patches are welcome."
My suggestion was "increase the default cache size", but apparently there's no interest in that, so a patch won't do much, will it? So what should be done, if not a default cache size increase?
Posts: 144
Joined: Aug 2010
Reputation: 2
Again, what do you think would fix the problem, if not increasing the default cache size which you've already rejected?
Posts: 11,582
Joined: Feb 2008
Reputation: 84
davilla
Retired-Team-XBMC Developer
Silly rabbit, if I knew what would fix the problem, the problem would be fixed.
AdvancedSettings.xml has settings to alter the cache. They're there because we don't believe they're the proper way to fix this issue, yet they seem to help some users, so we expose them. You can also set the value to zero, which switches to a file-based caching method that sucks down the entire file.
Now if you really want to contribute to solving this issue, come up with something better than the hammer approach of increasing the cache size, which just solves the problem by hiding it.
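For reference, the settings being discussed would have looked roughly like this in AdvancedSettings.xml (element names and placement recalled for XBMC of this era, so verify against the wiki for your version; the value is in bytes, and 0 switches to the file-based whole-file cache davilla mentions):

```xml
<!-- Sketch of an AdvancedSettings.xml cache override, circa XBMC Eden.
     Check the XBMC wiki for the exact element names in your version. -->
<advancedsettings>
  <network>
    <!-- Read-ahead cache for network streams, in bytes (~20 MB here,
         versus the 5 MB default under discussion).
         Setting this to 0 switches to the file-based cache, which
         downloads the entire file. -->
    <cachemembuffersize>20971520</cachemembuffersize>
  </network>
</advancedsettings>
```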
Posts: 144
Joined: Aug 2010
Reputation: 2
Right, I'll try to explain this again.
Cache management strategies don't work when the cache is so small that buffer overflows happen near-constantly. You may think of increasing the buffer as a 'hammer to crack a nut' approach, but it isn't. The cache size really has to change when content bitrates start to exceed the assumptions the cache was originally sized against. The 5 MB default is now hitting that limit, and needs to be pushed up quite a lot to handle the bitrates of newer streamed media.
Buffer overflows are not that big a deal on a local network, where latency is low and the response to bandwidth throttle and release is quick; there, a one-second content buffer for a 5 Mbps stream looks okay. But add latency that slows the throttle/release response, and 'overflows' happen more often, because hitting the high watermark is effectively an overflow when you have to throttle back so abruptly and so often. Those overflows then turn into underruns, because the starved connection isn't recovered before the buffer runs out. The only fix is a larger cache; I don't know of any way to make a cache holding under one second's worth of content work well with high-latency flow control.
I also tried setting it to 0, but it didn't help, and may actually have made things worse. I don't see why, because it should behave as an almost endless cache that never overflows, but the disk cache doesn't seem to work too well.
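The watermark flow control being described can be sketched as a toy state machine (hypothetical names, not XBMC code): reads are throttled once the cache fills past the high mark and released once playback drains it below the low mark. The failure mode in the post is the latency between flipping the throttle flag and the sender actually reacting.

```python
# Toy sketch (not XBMC code) of high/low watermark cache flow control.

CACHE_SIZE = 5 * 1024 * 1024      # the 5 MB default cache under discussion
LOW_MARK   = 0.30 * CACHE_SIZE    # release the throttle below this level
HIGH_MARK  = 0.60 * CACHE_SIZE    # start throttling above this level

class CacheThrottle:
    def __init__(self):
        self.level = 0            # bytes currently buffered
        self.throttled = False    # whether we've told the reader to slow down

    def on_bytes_received(self, n):
        """Bytes arriving from the network fill the cache."""
        self.level = min(self.level + n, CACHE_SIZE)
        if self.level >= HIGH_MARK:
            self.throttled = True     # ask the sender side to back off

    def on_bytes_played(self, n):
        """Playback drains the cache. Returns True on buffer underrun."""
        self.level = max(self.level - n, 0)
        if self.level <= LOW_MARK:
            self.throttled = False    # resume full-speed reads
        return self.level == 0        # True == underrun, playback stalls
```

With a high-bitrate stream and a 5 MB cache, the 30% band between the marks is drained in well under the throttle/release response latency, so the underrun branch fires, which is exactly the argument above.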
Posts: 144
Joined: Aug 2010
Reputation: 2
2012-05-08, 01:26
(This post was last modified: 2012-05-08, 02:17 by barberio.)
Okay, here's the maths for why a 5 MB cache is too small once your assumptions include 1080p content streams.
Let's assume a 6 Mbps stream. That 5 MB cache, if full, holds less than one second of content. But of course, if it fills, it causes a buffer overflow, and the download has to pause, which would almost certainly cause a buffer underrun because of the latency in going from paused back to downloading. So the buffer is actually kept in a sweet spot by throttling the download above a certain fill percentage and releasing below another.
Now say the high/low watermarks are set at 30/60. During playback the cache starts overfilling, goes over 60%, and throttling starts. But 30% of 5 MB is 1.5 MB, and that's 250 ms of content. Latencies somewhere over 250 ms are easy to come by on the internet, so let's see what happens if the RX's throttling isn't seen by the TX until then. The cache drains back below 30% within that 250 ms due to video playback, and the system releases the throttle. But at that point the TX is only just responding to your throttle, while your network stack has been discarding bytes from its queue to satisfy it. So it's another 250+ ms of latency delay before download speeds come back up, and the buffer drops into deficit... Work it out and you get a buffer underrun whenever the latency delay exceeds 312 ms. Random intermittent latency spikes like that are all too common on any internet connection, even high-bandwidth ones.
Edit: All of which is wrong due to decimal point placement error.
Again, this is latency, not bandwidth. The TX-RX link can have tons of bandwidth, but with too small a buffer, the latency of the throttle/release response becomes the issue instead.
Posts: 144
Joined: Aug 2010
Reputation: 2
Oops, made an error there. That should be a 362 ms latency, assuming throttle-back is to 20% of the bitrate, which was the step I missed out. I should probably review the maths when it's not past midnight.
I also forgot to note that this is the latency of the response to RX throttling, which is always larger than simple round-trip response latency, since it includes decisions made in the networking stack and sampling over a period. I would expect it could get up to 500-600 ms, perhaps longer; certainly longer for a TX transmitting to many RXs at once.
Posts: 26,215
Joined: Oct 2003
Reputation: 187
How many 50 mbit streams are you transferring over anything other than a local network?
Posts: 144
Joined: Aug 2010
Reputation: 2
2012-05-08, 02:13
(This post was last modified: 2012-05-08, 02:15 by barberio.)
Right... I knew something else was screwy there, since I was getting numbers substantially lower than I thought could be right. However, even a 4-second latency between the RX throttling and the TX responding to the throttle is *not unusual*. Again, this isn't round-trip response latency, but the latency of a network stack responding to the observed behaviour of a remote computer.
Periods where this latency exceeds the span of the buffer only have to occur rarely, say once or twice every thirty seconds, to make video streaming unwatchable.
Posts: 144
Joined: Aug 2010
Reputation: 2
Let's try the maths again...
2880 ms, then, is the important 30% number for a 5 m*byte* cache at 5 m*bit*ps. So if the time between the RX initiating a throttle/release and the TX recognising it is ever larger than that, a 5 MB 30/60 caching strategy will fail. And I think occurrences of that are quite likely, more than once or twice.
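The shape of that arithmetic can be sketched as follows (`drain_time_ms` is a hypothetical helper; decimal units are assumed here, and the exact figure depends on unit conventions and on what residual download rate you assume during throttling, which is presumably why these numbers don't exactly match the 2880 ms above):

```python
# Back-of-envelope: how long does playback take to drain the 30% band
# of the cache once the incoming stream has been throttled?
# Decimal units assumed: 1 MB = 1e6 bytes, 1 Mbps = 1e6 bit/s.

def drain_time_ms(cache_bytes, band_fraction, bitrate_bps, residual_bps=0):
    """Time for playback at `bitrate_bps` to drain `band_fraction` of the
    cache while the download has been throttled down to `residual_bps`."""
    band_bits = cache_bytes * 8 * band_fraction
    net_drain_bps = bitrate_bps - residual_bps
    return 1000 * band_bits / net_drain_bps

# 30% of a 5 MB cache at a 5 Mbps stream, download fully paused:
print(drain_time_ms(5_000_000, 0.30, 5_000_000))                      # 2400.0 ms
# Same, but assuming the throttled download still runs at 20% of bitrate:
print(drain_time_ms(5_000_000, 0.30, 5_000_000, residual_bps=1_000_000))  # 3000.0 ms
```

Either way, the conclusion holds: the throttle/release response latency only has to exceed a couple of seconds for the 30/60 strategy on a 5 MB cache to fail at this bitrate.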
Posts: 2,752
Joined: Dec 2008
Reputation: 23
bobo1on1
cheapass Team-XBMC Developer
So basically you're saying XBMC needs a bigger cache for high bitrate streams.
Posts: 6,252
Joined: Jun 2009
Reputation: 115
da-anda
Team-Kodi Member
2012-05-08, 11:01
(This post was last modified: 2012-05-08, 11:03 by da-anda.)
I have no technical background in this regard, but would it be possible to detect the bandwidth of a stream at runtime and increase the buffer size on demand? So if a high-bandwidth stream would only have ~1 second in the buffer, increase the buffer to at least 2 or 5 seconds, if that makes any sense? And for VBR h264 material, do constant peak monitoring and adjust the buffer whenever a new peak exceeds the current minimum buffer size?
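That suggestion can be sketched as sizing the buffer in seconds of content rather than bytes (hypothetical names, not XBMC code): track the peak observed bitrate and grow the buffer so it always holds a fixed playback duration, never shrinking below the old default.

```python
# Sketch of the adaptive-buffer idea above (hypothetical, not XBMC code):
# size the read-ahead buffer from the measured bitrate so it always holds
# TARGET_SECONDS of content, growing when a new VBR peak is observed.

TARGET_SECONDS = 5            # how many seconds of content to hold
MIN_BUFFER = 5 * 1024 * 1024  # never shrink below the old 5 MB default

class AdaptiveBuffer:
    def __init__(self):
        self.peak_bps = 0
        self.size = MIN_BUFFER

    def observe_bitrate(self, bps):
        """Feed each measured stream bitrate; returns current buffer size."""
        if bps > self.peak_bps:
            self.peak_bps = bps
            needed = TARGET_SECONDS * bps // 8   # bits/s -> bytes
            self.size = max(self.size, needed)   # grow only, never shrink
        return self.size
```

For example, observing a 20 Mbps peak would grow the buffer to 12.5 MB (five seconds of content), while low-bitrate streams would stay at the 5 MB floor.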