Posts: 26,215
Joined: Oct 2003
Reputation: 187
That's fine then, you can do some tests without messing around too much with git. Let's start by making some changes to xbmc/ThumbLoader.cpp - see CVideoThumbLoader::LoadItem().
Start by commenting out the blocks for resume, fanart, thumbnails, and streamdetails, leaving only the block that calls FillLibraryArt() - in other words, all that should remain in CVideoThumbLoader::LoadItem() is the FillLibraryArt() call.
See what sort of speedup you get from that and we'll go from there. I'll prepare a patch in the meantime that speeds things up in other ways.
Cheers,
Jonathan
Thanks - most useful. I'll do up a patch that takes care of the resume point (which at the moment doesn't apply to tvshows, at least) and also of the constant load/unload of the db.
Ok, I've pushed a fix to master instead. You'll want 625736ad or higher.
There's some slight room for further speedup, mostly around skipping the items that don't have art available so they don't check for art on disk until all other items are done, but we'll see if that's needed before looking at it (it won't actually speed it up all that much further I suspect).
Lastly, assuming this still isn't good enough, we'll have to denormalise the database. This is something I don't want to do, however, as I'd prefer that major changes to the database occurred in tandem with a rewrite, which I just don't have time for at the moment.
Cheers,
Jonathan
Much better. I suspect performance is now roughly equivalent to what it was before: previously we used to stat() (on the loader thread) every piece of art - local stats would have been quick, but over the network they'd have been pretty slow. In other words, art used to be gathered as part of the directory fetch, so the fetch itself was slower, but thumbs appeared "instantly" after it.
Try commenting out the fetch for fanart+thumbs like you did before (the resume thing has been taken care of) and see if that helps.
With a query per item, there's not a lot we can do at this point to speed things up without denormalising the database, which I really don't want to have to do (but will if it's the only option).
Cheers,
Jonathan
@pecinko: Resume point is available in the listing (a wee play icon is available next to items in confluence IIRC).
Thanks for the measurements.
Resume point for tvshows (which I presume is what you're measuring) should not make any difference whatsoever, as the fileid is empty there. All you're saving is a function call and a comparison, which should take microseconds if anything.
Fanart is the big one: for your shows that have no fanart available (local or remote), we hit the disk to look for it. That lookup could potentially be deferred until after we've retrieved the information that's already cached (essentially we'd move the FillLibraryArt() block into CVideoThumbLoader::OnLoaderStart()).
And yes, the previous directory fetch time was slower, particularly if you were using path substitution. This applied primarily to the first fetch, as it counted towards the "this directory fetch is slow -> cache the results" logic, which is now likely no longer being hit. Note that the directory fetch time for other content (movies, for example) can be made faster still, as we could remove the streamdetails fetch there (which is a synchronous query per item), so you may not see quite as much difference in dir fetch time there. One idea to improve things here is to cache the results of the thumbloader thread in OnLoaderFinish() (and load them in OnLoaderStart(), if available). IIRC the picture info loader does this.
For 1600 items, though, 1800ms sounds about right: 1600 queries over the network would be expected to be of that order of magnitude, I should think. The only way to speed that up is to use a single query in its place, but that is non-trivial, as we have arbitrary amounts of art per item to retrieve, so simply joining the tables would just create multiple rows per item to parse. One possible solution here is to denormalise the art data so that it is available as a single entry per item, with the disadvantage of the added complexity of maintaining that data.
Yet another option is to move the additional information handling to the item level rather than handling it at the window level: i.e. when information is required for a particular listitem, if the info is not available (and hasn't been looked for in the past), we fire off a handler for it. This means things still won't be instant, of course, but they'll be pretty close, as the handler runs the very first time the item appears in the UI. The trick here is that the item needs to be aware of its loader function.
Note that there's always a tradeoff: either you get a slow listing (which could be cached, though the cache could go stale) with all data available immediately, or you get a listing available very quickly, with additional information not critical to the initial listing arriving shortly after.
Personally, I think it's probably good enough for the most part once we add caching on top. Once we have caching, we could fill from the cache synchronously, which would essentially give you the best of both worlds: no long waits for initial listings (with art and extras loaded shortly after), and then fast listings after that.
Cheers,
Jonathan
Posts: 253
Joined: Jan 2011
Reputation: 3
2012-05-18, 10:17
(This post was last modified: 2012-05-18, 10:18 by vicbitter.)
Agreed... there is still a minor lag initially, but it is now within usable limits. I have compared against Eden (with path substitution) and I'm seeing much better performance once the list is fully loaded.
If further optimisation can be done, that would be great - I like the caching option. Happy to help test any code, as I have quite a large library (800+). If it helps, I can compile code etc., so I could make and apply any suggested mods locally before they need to be committed to master.
Thanks for this fantastic new functionality!