2021-12-08, 04:38
Recently I've been tweaking the cache settings in advancedsettings.xml in the hope of optimizing buffering behavior enough to allow smooth playback of high-bitrate HEVC content over SMB v3 (or NFS if needed). I know most people give up and just use a wired ethernet connection, but I'm trying to understand why that would technically be necessary.
My setup consists of an Nvidia Shield TV 2015 running Kodi 19.3 (technically the latest of the stuttering fix builds from here) that attempts to play content from a Windows machine. The test files are stored on an SSD, simply to rule out performance issues on that end. The source has a wired connection to the router, so the Shield is the only wireless link in the chain. The file that most consistently fails is a 4K Blu-ray remux with an average bitrate of roughly 55-65 Mbit/s.
Even the highest bitrate Blu-rays should be well within the limits of what a modern Wi-Fi network can deliver under reasonably good conditions, so raw throughput can't strictly be the limiting factor. My first guess was that the culprit is the less consistent throughput or the higher latency of wireless, which more aggressive buffering might alleviate. Tweaking memorysize and readfactor improved the situation a lot, to the point where some of my test content now works mostly fine (it was essentially unplayable before), but it's still not perfect. Looking at the network usage of my Shield, it's still only slightly faster than the bitrate of the content at best, despite a readfactor (30) that I thought would be enough to effectively make Kodi fill the buffer at whatever speed the network allows; that does not appear to be the case. It could be that my buffer size of 267 MB (which supposedly requires roughly 801 MB of RAM according to the wiki) is simply too small for me to notice the throughput peaks when the buffer is refilled, but I'm not sure, and I'll need to do more testing with the appropriate debug overlay enabled in order to monitor that.
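To sanity-check those numbers, here's the arithmetic as a small Python snippet (this is my own reading of the wiki's 3x RAM rule and of readfactor as a multiplier on the average bitrate, so treat it as a sketch rather than gospel):

# Back-of-the-envelope cache arithmetic for the settings quoted below.
memorysize = 279_969_792   # bytes, from <memorysize>; exactly 267 MiB
readfactor = 30            # from <readfactor>
bitrate = 60_000_000       # bits/s, average of the problematic remux

buffer_mib = memorysize / 2**20
ram_mib = 3 * buffer_mib   # wiki rule of thumb: ~3x memorysize of free RAM

# readfactor is supposed to cap the cache fill rate at
# readfactor * average bitrate, which should dwarf any Wi-Fi link here:
fill_cap_mbit = readfactor * bitrate / 1e6

# how many seconds of video a full buffer can bridge during a dip:
cushion_s = memorysize * 8 / bitrate

print(f"buffer: {buffer_mib:.0f} MiB, est. RAM needed: {ram_mib:.0f} MiB")
print(f"fill-rate ceiling: {fill_cap_mbit:.0f} Mbit/s")
print(f"cushion at 60 Mbit/s: {cushion_s:.0f} s")

On paper that ceiling (1800 Mbit/s) is nowhere near binding, which is what makes the observed near-realtime fill rate so confusing.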
While digging around in the source code for other hints on how to use these settings, I found that there is also an undocumented chunksize value that can be tweaked. Increasing it from 128 KB to 1 MB seemingly improved SMB v3 performance a bit further, making it surpass NFS (haneWin NFS Server with increased thread count and transfer size) by quite a bit. I still encountered playback problems a few minutes in, though.
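Before tweaking further, I also want to rule out the raw link itself. Something like this minimal sketch, run from another wireless client in the same spot as the Shield, should show what plain sequential reads from the share can sustain (the path is a placeholder for wherever the share is mounted):

import time

PATH = "/mnt/media/test-remux.mkv"  # placeholder: local mount of the share
CHUNK = 1024 * 1024                 # 1 MiB reads, matching <chunksize>

total = 0
start = time.monotonic()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start

print(f"{total / 2**20:.0f} MiB in {elapsed:.1f} s = "
      f"{total * 8 / elapsed / 1e6:.0f} Mbit/s")

If that already tops out near the content's bitrate, the bottleneck is the link or SMB itself rather than anything Kodi's cache settings can fix.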
What should I be tweaking next? Is there a specific bottleneck I haven't thought of?
My advancedsettings.xml:
Some relevant discussion:
https://github.com/xbmc/xbmc/pull/9681
https://github.com/xbmc/xbmc/pull/17042
https://github.com/xbmc/xbmc/issues/16975
https://github.com/xbmc/xbmc/pull/2901
My setup consists of an Nvidia Shield TV 2015 running Kodi 19.3 (technically the latest of the stuttering fix builds from here) that attempts to play content from a Windows machine. The test files are stored on an SSD, simply to rule out performance issues on that end. The source has a wired connection to the router so the Shield is the only wireless part of the chain. The file that most consistently fail is a 4K bluray remux with an average bitrate of roughly 55-65 Mbit/s.
Even the highest bitrate blurays should be well within the limits of what a modern Wi-Fi network can deliver under reasonably good conditions, so it can't strictly be the raw throughput that is the limiting factor. My first guess was that it could be due to the less consistent througput performance or the higher latency, which more aggressive buffering might alleviate. While tweaking memorysize and readfactor improved the situation a lot to the point where some of my test content work mostly fine now (was essentially unplayable before), it's still not perfect. Looking at the network usage of my Shield, it's still only slightly faster than the bitrate of the content at best, despite a readfactor (30) that I thought would be enough to effectively make Kodi fill up the buffer at whatever speed the network allows but this does not appear to be the case. It could be that my buffer size of 267 MB (supposedly requires roughly 801 MB ram according to the wiki) is simply too small for me to notice the thoughput peaks when the buffer is refilled, but I'm not sure and I'll need to do more testing after enabling the appropriate debug overlay in order to monitor that.
While digging around in the source code for other hints of how to use these settings, I found that there is also an undocumented chunksize value that can be tweaked. Increasing that from 128 KB to 1 MB seemingly improved SMB v3 performance a bit further, making it surpass NFS (haneWin NFS Server with increased thread count and transfer size) by quite a bit. I still enountered playback problems a few minutes in though.
What should I be tweaking next? Is there a specific bottleneck I haven't thought of?
My advancedsettings.xml:
xml:
<advancedsettings>
  <cache>
    <!-- 4 = buffer all network filesystems (SMB, NFS, ...) -->
    <buffermode>4</buffermode>
    <!-- 267 MiB; the wiki says Kodi needs roughly 3x this in free RAM -->
    <memorysize>279969792</memorysize>
    <!-- cap cache filling at 30x the average bitrate -->
    <readfactor>30</readfactor>
    <!-- undocumented; raised from the 128 KiB default to 1 MiB -->
    <chunksize>1048576</chunksize>
  </cache>
</advancedsettings>
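For completeness: this lives at Android/data/org.xbmc.kodi/files/.kodi/userdata/advancedsettings.xml on the Shield (the usual Kodi userdata location on Android, as far as I can tell), and everything outside <cache> is left at its defaults.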
Some relevant discussion:
https://github.com/xbmc/xbmc/pull/9681
https://github.com/xbmc/xbmc/pull/17042
https://github.com/xbmc/xbmc/issues/16975
https://github.com/xbmc/xbmc/pull/2901