Improving movie db update performance.
#46
Would an extra JSON API help? Either a start/end update method that would bookend a series of updates, or extra attribute on the SetFooDetails method? Or is there any way to defer the GUI update a few seconds in case another JSON update comes in (or just defer updates until the current JSON session is disconnected?)
Texture Cache Maintenance Utility: Preload your texture cache for optimal UI performance. Remotely manage media libraries. Purge unused artwork to free up space. Find missing media. Configurable QA check to highlight metadata issues. Aid in diagnosis of library and cache related problems.
Reply
#47
There was a discussion about that in an earlier post about improving performance, and it was discarded.

But an API overlay to allow transaction-type behaviour would solve not only this use case but also the problem of hundreds of notifications being sent when adding tons of items to playlists, for example.
Reply
#48
Maybe buffer the notifications?

So add them to a buffer, and if no more are added within X seconds, execute the notification.
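The buffering idea could be sketched roughly like this; a minimal C++ sketch in which the class and method names are made up for illustration and are not Kodi's actual notification code:

```cpp
#include <chrono>
#include <string>
#include <vector>

// Hypothetical sketch of the buffering idea: notifications are queued and
// only flushed once no new one has arrived for a quiet period.
// NotificationBuffer/Add/FlushIfQuiet are illustrative names, not Kodi API.
class NotificationBuffer {
public:
    explicit NotificationBuffer(std::chrono::milliseconds quiet)
        : m_quiet(quiet) {}

    // Queue a notification and remember when it arrived.
    void Add(const std::string& msg,
             std::chrono::steady_clock::time_point now) {
        m_pending.push_back(msg);
        m_lastAdd = now;
    }

    // Called periodically (e.g. from an existing timer): returns the batched
    // notifications once the quiet period has elapsed, else an empty vector.
    std::vector<std::string> FlushIfQuiet(
        std::chrono::steady_clock::time_point now) {
        if (m_pending.empty() || now - m_lastAdd < m_quiet)
            return {};
        std::vector<std::string> out;
        out.swap(m_pending);  // hand back the batch and clear the buffer
        return out;
    }

private:
    std::chrono::milliseconds m_quiet;
    std::vector<std::string> m_pending;
    std::chrono::steady_clock::time_point m_lastAdd{};
};
```

Passing the clock value in (rather than reading it inside) keeps the sketch testable; real code would hook this into whatever timer the GUI already runs.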
Read/follow the forum rules.
For troubleshooting and bug reporting, read this first
Interested in seeing some YouTube videos about Kodi? Go here and subscribe
Reply
#49
Certainly I think there are better solutions to the problem. However, they probably require input from those with more knowledge of how some of those areas work.

In the meantime I've pushed another commit for what profiling suggests is part of the problem. It seems the FileItem::IsSamePath code was doing an expensive check (building a CURL object and checking whether the URI is video or audio) before doing a cheap check (whether something is != null).

I've flipped them over, and now I see no lag at all after pushing 400 movie updates through.
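For illustration, the reordering described above amounts to something like the following. This is a simplified stand-in, not the actual FileItem::IsSamePath code; the call counter is only there to make the saving visible:

```cpp
#include <memory>
#include <string>

// Simplified stand-in for a FileItem: a tag pointer plus a path.
struct Item {
    std::shared_ptr<std::string> tag;  // stand-in for a VideoInfoTag
    std::string path;
};

// Hypothetical stand-in for the expensive check (the real one builds a
// CURL object and inspects the URI). Counts calls so the saving shows up.
static int g_expensiveCalls = 0;
static bool ExpensivePathCompare(const Item& a, const Item& b) {
    ++g_expensiveCalls;
    return a.path == b.path;
}

// The reordered comparison: cheap null test first, expensive check last.
bool IsSamePathSketch(const Item& a, const Item& b) {
    // Cheap test first: if either item lacks a tag there is nothing to match.
    if (!a.tag || !b.tag)
        return false;
    // Only now pay for the expensive comparison.
    return ExpensivePathCompare(a, b);
}
```

When most items fail the cheap test (as in a 400-movie list being rescanned), the expensive path is simply never taken.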

MillhouseVH, if you get time you could grab that commit and see if it makes your Pi happier?
Reply
#50
(2014-03-02, 17:49)cg110 Wrote: MillhouseVH, if you get time you could grab that commit and see if it makes your Pi happier?

I re-ran my test with the previous patches (not including the latest "flipped test" patch) and the GUI is actually unresponsive for longer than I claimed - it's more like 80 seconds after the final update. While updates are being processed the GUI is almost completely unresponsive (one GUI update occurring every 7-8 seconds) until eventually the GUI stops responding completely, coming back to life 80 seconds after the last JSON/SQL update.

When including your most recent "flipped test" patch, the GUI is still pretty much unresponsive during updates, however it now only remains unresponsive after the last update for a maximum of 25 seconds, so there is some improvement here.

Just to give you an idea of what happens when there are no SQL updates being applied (because they've been optimised away), within the Movie library view the GUI remains responsive and the CPU is not maxed out (peaking at about 60% - bear in mind a Pi with Confluence idles at about 20% CPU load) so it's not the fundamental JSON/SQL processing that is causing the GUI to become unresponsive when JSON updates are being processed. Indeed, even when SQL updates are being applied, and the user is at the Home menu, it's possible to navigate briskly across the Home menu options without any lag (peak 85% CPU).
Reply
#51
More digging, and it appears that the issue is related to a list kept of all the movies while in the movie view. That list is walked for each update, and each item is compared with the "updated" item.

That comparison isn't cheap (it uses IsSamePath, and ends up doing the CURL parsing on one of the items).

I've added another optimisation to IsSamePath: if the items being compared both contain VideoInfoTags (which they generally do), then just compare the movie and file ids to determine whether it's the same path (as it should be, even if one is actually in disk-path format (nfs or smb) and the other in videodb format).
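A minimal sketch of that id short-cut. The structs here are simplified stand-ins for Kodi's FileItem/VideoInfoTag (the field names follow Kodi's naming convention but the types are made up); only the id-comparison logic is the point:

```cpp
#include <string>

// Simplified stand-in for a VideoInfoTag: just the two database ids.
struct VideoInfoTagSketch {
    int m_iDbId = -1;    // movie id in the video database
    int m_iFileId = -1;  // file id in the video database
};

// Simplified stand-in for a FileItem.
struct FileItemSketch {
    const VideoInfoTagSketch* tag = nullptr;
    std::string path;
};

bool IsSamePathById(const FileItemSketch& a, const FileItemSketch& b) {
    // If both items carry tags, the ids identify the movie regardless of
    // whether the path is in disk form (nfs://, smb://) or videodb:// form.
    if (a.tag && b.tag)
        return a.tag->m_iDbId == b.tag->m_iDbId &&
               a.tag->m_iFileId == b.tag->m_iFileId;
    // Fall back to a plain path comparison (the expensive route in real code).
    return a.path == b.path;
}
```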

It'll be interesting to see if that helps.
Reply
#52
@cg110 - with that last patch, I'd say you've nailed it! The GUI remains entirely responsive during an update (i.e. when SQL updates are actually being applied) - great job!
Reply
#53
Well, I think I've understood the issue around the Is* tests some more.

I've added another commit that switches the FileItem tests for IsVideoDB and IsMusicDb to the URIUtils versions (which matches how the rest of the Is* tests in FileItem work). So now it's a simpler string compare.

That removes the parsing of the URL into a URL object just to compare the protocol. Part of me wonders if changing that validation is a good/bad thing.

However, given the function name suggests it shouldn't be much more than a simple test, it seems reasonable to expect it to be cheap to run.
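That cheaper test amounts to a plain scheme-prefix compare, something like the following (the function name and shape are illustrative, not the actual URIUtils signature):

```cpp
#include <string>

// Check whether a path starts with "<protocol>://" without constructing a
// URL object. Illustrative helper, not the real URIUtils function.
bool HasProtocolPrefix(const std::string& path, const std::string& protocol) {
    const std::string prefix = protocol + "://";
    // compare() with a position/length pair avoids allocating a substring.
    return path.compare(0, prefix.size(), prefix) == 0;
}
```

A string compare over a handful of characters, versus parsing the full URL into an object, is exactly the kind of saving that adds up when it runs once per item in a large list.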

Not sure you'll notice it being snappier anywhere else with this change, but I am wondering if the change to IsSamePath is actually needed; it's still cheaper to compare ids, but it does add more complexity to the code.

Hmm, I wonder if I should be doing these other changes in another pull request, as they're fixes for issues that the db changes uncovered by hitting the other code that much more often.

However, I've no idea how I'd untangle them. I guess another branch from head, cherry-pick them, and revert them on the db change branch.

(Then I'd need another branch of my own with all these PRs in it to test with...)
Reply
#54
The usual method is as you propose: separate branches for specific issues, with a merged up branch for you to test with.
Always read the XBMC online-manual, FAQ and search the forum before posting.
Do not e-mail XBMC-Team members directly asking for support. Read/follow the forum rules.
For troubleshooting and bug reporting please make sure you read this first.


Reply
#55
As an end-user I want to say thank you to cg110 for this achievement. This is a massive usability improvement for XBMC. I really look forward to it making its way into mainline, and back down into OpenELEC, etc.
Reply
#56
FYI, I've now split the IsSamePath work into a separate PR:
https://github.com/xbmc/xbmc/pull/4345

(Note it's not quite the same as the previous changes, but I believe the performance should be very similar)

and reverted those changes from the main PR:
https://github.com/xbmc/xbmc/pull/4301

I did try and squash the reverted changes, but my git{hub} knowledge couldn't work out the black runes to make it happen. I did wonder about doing a rollback, or a forced push, but I started to worry it would mess up the Pull Request system...
Reply
#57
(2014-03-07, 01:17)cg110 Wrote: I did try and squash the reverted changes, but my git{hub} knowledge couldn't work out the black runes to make it happen. I did wonder about doing a rollback, or a forced push, but I started to worry it would mess up the Pull Request system...

Code:
git rebase -i HEAD~17
Follow the instructions. You can just delete the commit and the reverted commit. You can also move/squash commits, and edit them.
When you are done:
Code:
git push -f

Yes, forced pushes are fine for pull requests. The PR just updates.
Reply
#58
Ok, thanks for the guidance. I think that's worked; it's removed the additions and the reversion for the code now in PR 4345 (although I did manage to get it into a right mess, but managed to untangle said mess).

I do wonder if there's a neater way, as I didn't include one minor optimisation in the above PR, which has now been lost to history; it wouldn't be hard to redo. I guess I dislike seeing code deleted/thrown away that might still have a use, but it does make both PRs tidier without it.
Reply
#59
(2014-03-07, 01:46)cg110 Wrote: Ok, thanks for the guidance. I think that's worked; it's removed the additions and the reversion for the code now in PR 4345 (although I did manage to get it into a right mess, but managed to untangle said mess).

I do wonder if there's a neater way, as I didn't include one minor optimisation in the above PR, which has now been lost to history; it wouldn't be hard to redo. I guess I dislike seeing code deleted/thrown away that might still have a use, but it does make both PRs tidier without it.

Normally it's best to work on one branch and PR from a separate branch (i.e. branch or cherry-pick from your working branch to the PR branch).
Then you can squash and force push the PR branch without losing anything.

Technically you've probably not lost anything. Git only deletes things when garbage collecting; you've only lost the hash needed to access them, and "git reflog" allows you to find old hashes.
Reply
#60
(2014-03-07, 01:53)popcornmix Wrote: Normally best to work on one branch and PR from a separate branch (i.e. branch or cherry-pick from working branch to PR branch).
Then you can squash and force push the PR branch without losing anything.

I am starting to think that might be a better way to work: have a "develop" branch and cherry-pick things from it to PR branches.

It just ends up a little messy trying to make sure they all build (given I've the strange need to keep adding the DXSDK back into the Visual Studio project; I suspect VC 2013 changed paths, as the DXSDK changed with the latest Windows SDK...).
Reply
