Help with paginating results & displaying x at a time
#1
Question 
I'm trying to implement pagination for my add-on in the following fashion:
Add-on setting links_per_page is set to 10
1. Add-on scrapes page 1 of source (for example's sake, say it returns 15 links)
2. Displays the first 10 links on the first page of the add-on
3. Stores the excess (5) links in the paging.p pickle
4. Adds a Next Page directory item at the bottom of the list of links in kodi
5. User selects the Next Page directory item
6. The excess (5) links from the pickle are added to the beginning of the list for the next page
7. Add-on scrapes page 2 of source (returns 15 links)
8. Displays the excess (5) links, followed by the first 5 links from page 2
9. Store the excess (5) links from source's page 2 in the paging.p pickle
10. Adds a Next Page directory item at the bottom of the list of links in kodi
And continues like that.

I'm able to successfully accomplish the above flow with the following code in most cases.
Code:
def get_listing(self, source, page=1):
        import importlib
        import cPickle
        if '[' in source: #Remove Status Label
            source = source[:source.find('[')]
        source_str = source.lower()
        source = importlib.import_module('.'+source_str, package='resources.lib.sources')
        
        #Stored here: C:\Users\mhill\Desktop\Portable Apps\Kodi Portable\Kodi
        try:
            f = open('paging.p', 'rb')
            paging = cPickle.load(f)
        except (IOError, EOFError, cPickle.UnpicklingError): #Avoid a bare except
            f = open('paging.p', 'wb')
            paging = {source_str:{}}
            cPickle.dump(paging, f, -1)
        f.close()
        
        #Issue with current method:
        #If one page has over 2x per_page it causes Next Page option
        #to not be displayed after first time it's selected
        
        per_page = int(data.addon.getSetting('links_per_page'))
        if source_str in paging and 'excess' in paging[source_str]:
            display_these_now = paging[source_str]['excess']
        else:
            display_these_now = []
        i=1
        
        links = source.Site().scrape(page)
        
        while (len(display_these_now) < per_page and len(links) != 0):
            display_these_now = display_these_now + links
            i += 1
            links = source.Site().scrape(int(page)+i)
        if len(display_these_now) > per_page:
            paging[source_str] = {'excess': display_these_now[per_page:]}
            display_these_now = display_these_now[:per_page]
        
        f = open('paging.p', 'wb')
        cPickle.dump(paging, f, -1)
        f.close()
        print list(display_these_now)
        for i, (title, image, link) in enumerate(display_these_now):
            list_item = xbmcgui.ListItem(label=title)
            list_item.setProperty('IsPlayable','true')
            list_item.setArt({'poster': image, 'banner': image})
            url = get_url(action='play', video=link)
            xbmcplugin.addDirectoryItem(data._handle, url, list_item, False)
        if len(links) != 0:
            list_item = xbmcgui.ListItem(label='NEXT PAGE >>>')
            #list_item.setArt({'poster': image, 'banner': image})
            url = get_url(action='listing', source=source_str, page=int(page)+i)
            xbmcplugin.addDirectoryItem(data._handle, url, list_item, True)
        xbmcplugin.addSortMethod(data._handle, xbmcplugin.SORT_METHOD_NONE)
        xbmcplugin.endOfDirectory(data._handle)

The case where this fails to function properly is when a single page returns more than 2x the links_per_page value (e.g., 30 links returned when links_per_page is set to 10). In that case I get the Next Page directory item for the first page, but once I click it, the second page of results loads without a Next Page directory item at the bottom of the directory.
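One thing worth noting about the code above (an observation, not a confirmed root cause): the page counter `i` is reused as the loop variable in `for i, (title, image, link) in enumerate(...)`, so by the time the Next Page URL is built, `i` holds the index of the last displayed item rather than the number of source pages consumed. A minimal sketch of the hazard, with hypothetical values:

```python
# Illustrates the variable-shadowing hazard: `i` counts scraped source
# pages, but the enumerate loop reuses the same name, so the "next page"
# number is computed from the last item index instead of the counter.
def next_page_buggy(page, pages_scraped, items):
    i = pages_scraped
    for i, item in enumerate(items):    # clobbers the page counter
        pass                            # (each item would be displayed here)
    return page + i                     # i is now len(items) - 1

def next_page_fixed(page, pages_scraped, items):
    for idx, item in enumerate(items):  # distinct name leaves the counter alone
        pass
    return page + pages_scraped

# With 10 displayed items and 2 source pages scraped starting at page 1:
# next_page_buggy(1, 2, list(range(10)))  -> 10 (jumps far past page 2)
# next_page_fixed(1, 2, list(range(10)))  -> 3
```

If the Next Page URL jumps to a page the source doesn't have, the next invocation scrapes an empty list and the `len(links) != 0` check suppresses the Next Page item, which would match the symptom described.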

Even if that issue were resolved, I'd still face the confusing dilemma of keeping track of each source's page.
Example:
Add-on Page 1 (10 links - taken from source) | Source Page 1 (20 links) | Pickle (10 links)
Add-on Page 2 (10 links - taken from pickle) | Source Page 1 | Pickle (back to 0)
Add-on Page 3 (10 links - taken from source) | Source Page 2 (20 links) | Pickle (10 links)
Add-on Page 4 (10 links - taken from pickle) | Source Page 2 | Pickle (back to 0)
and so on
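Assuming a fixed number of links per source page (a hypothetical simplification; real scrape results vary), the bookkeeping in the table above reduces to arithmetic on a global item index:

```python
# Map an add-on page number to a (source_page, offset) pair, assuming each
# source page yields a fixed source_page_size links. Both default sizes are
# illustration values, not settings read from the add-on.
def locate(addon_page, per_page=10, source_page_size=20):
    start = (addon_page - 1) * per_page          # global index of first link shown
    source_page = start // source_page_size + 1  # which source page holds it
    offset = start % source_page_size            # position within that source page
    return source_page, offset

# Add-on page 1 -> source page 1, offset 0
# Add-on page 2 -> source page 1, offset 10
# Add-on page 3 -> source page 2, offset 0   (matches the table above)
```

With variable page sizes this no longer holds, which is why the excess-pickle (or a per-source page counter carried in the URL) becomes necessary.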

Also, in general, I feel like my pagination implementation is really messy/hacky. Can anyone show me a better way to implement this sort of feature?
Quote:pro·gram·mer (n): An organism capable of converting caffeine into code.
#2
Maybe i am missing something, but wouldn't it be easier to just show the scraped items of a page? And add a next item for the following page?
#3
(2017-07-19, 20:42)Skipmode A1 Wrote: Maybe i am missing something, but wouldn't it be easier to just show the scraped items of a page? And add a next item for the following page?

In this simple example, yes, it would be easier to do it that way. Honestly, with single sources I probably will do it that way. However, I also have sections ("By animal" & "By category") which show results from more than one source in the same directory. In those cases that method would produce over 100 links per page, which seems a bit much. Those are the situations where I was looking to implement this sort of pagination. Sorry, I should've made my example a bit more realistic. Does that give a better explanation of why I want to do this?
#4
I've used a Python paginate library in my addon with some success.

Then it's just a matter of using the library in your addon:
Code:
import resources.lib.paginate as paginate
...
page = paginate.Page(list_object, page=page_id, items_per_page=items_per_page_setting)
#Define the listitems to display
current_page = page.items
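For readers without the library, a minimal stand-in with the same shape (a hypothetical sketch, not the library's actual implementation; the `next_page` convention is assumed from typical paginate-style APIs):

```python
# A tiny stand-in for the paginate.Page interface used above: slice a list
# into one page of items and report whether another page follows.
class Page(object):
    def __init__(self, collection, page=1, items_per_page=20):
        start = (page - 1) * items_per_page
        self.items = collection[start:start + items_per_page]
        # None past the last page, assumed to mirror the library's behavior
        self.next_page = (page + 1
                          if start + items_per_page < len(collection)
                          else None)
```

`next_page` being None (or not) is what would drive whether a Next Page directory item gets added.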
#5
(2017-07-19, 20:42)Skipmode A1 Wrote: Maybe i am missing something, but wouldn't it be easier to just show the scraped items of a page? And add a next item for the following page?

I agree with you... I handle pagination like this:

Code:
def pagination(self, seq, rowlen):
        for start in xrange(0, len(seq), rowlen):
            yield seq[start:start+rowlen]
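A usage sketch of that chunking generator (written with range for Python 3; the original uses xrange; the link list and page selection are hypothetical):

```python
# Chunk a flat list of links into fixed-size pages and pick one to display;
# in a real add-on the page index would come from the plugin URL.
def pagination(seq, rowlen):
    for start in range(0, len(seq), rowlen):
        yield seq[start:start + rowlen]

links = ['link%d' % n for n in range(1, 26)]  # 25 hypothetical links
pages = list(pagination(links, 10))           # three chunks: 10, 10, 5
current = pages[0]                            # the page the user requested
has_next = len(pages) > 1                     # drives the "Next page" item
```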
Lunatixz - Kodi / Beta repository
PseudoTV - Forum | Website | Youtube | Help?
#6
(2017-07-20, 00:45)zachmorris Wrote: I've used a python paginate library in my addon with some success.

Then it's just a matter of using the library in your addon:
Code:
import resources.lib.paginate as paginate
...
page = paginate.Page(list_object, page=page_id, items_per_page=items_per_page_setting)
#Define the listitems to display
current_page = page.items
That seems like a great library, but I don't see how I could implement this for multi-source directory listings since I need to keep track of each source's page.

(2017-07-20, 01:05)Lunatixz Wrote:
(2017-07-19, 20:42)Skipmode A1 Wrote: Maybe i am missing something, but wouldn't it be easier to just show the scraped items of a page? And add a next item for the following page?

I agree with you... I handle pagination like this:

Code:
def pagination(self, seq, rowlen):
        for start in xrange(0, len(seq), rowlen):
            yield seq[start:start+rowlen]
So, you're saying I should just show all of the results of page 1 for all sources that pertain to the section? Doesn't that make for a potentially very large directory listing? I figured to make it easier for users they could set the amount of links per page for quicker skimming and selecting.
#7
(2017-07-20, 14:21)CaffeinatedMike Wrote:
[...]
So, you're saying I should just show all of the results of page 1 for all sources that pertain to the section? Doesn't that make for a potentially very large directory listing? I figured to make it easier for users they could set the amount of links per page for quicker skimming and selecting.
No, you don't display all content at once.

Assuming you have a list of available links... Chunk it into groups of 24, add 24 directory items with xbmcplugin.addDirectoryItem, and include a 25th folder item which will be your "Next page".

Sent from my SM-G935T
#8
(2017-07-20, 18:54)Lunatixz Wrote:
[...]
No, you don't display all content at once.

Assuming you have a list of available links... Chunk it into groups of 24, add 24 directory items with xbmcplugin.addDirectoryItem, and include a 25th folder item which will be your "Next page".


I understand that; it's what I was talking about doing. But by doing that, how do I keep track of the actual page of the sources that I'm scraping? Say we have the following situation:

Page 1 of the add-on directory is showing 25 of the 45 links that were scraped from every source's first page. Since there's excess, we add the Next Page directory item, which brings the user to the second page of the directory listing. At the bottom of that directory is yet again a Next Page directory item, which needs to scrape page two of every source.

TL;DR: How do we keep track of which page we're supposed to be scraping, as opposed to when we're supposed to simply display excess links already collected from a previously scraped page?
#9
You create a function and/or url parameters to keep track of your position. There are a dozen ways to handle it.

You can parse individual URLs on call, or you could chunk a list and keep track of its index position...

You'll have to pick a method that works for you.
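A sketch of the URL-parameter approach suggested above: carry each source's next page inside the Next Page item's URL, so no pickle file is needed between invocations. The plugin URL, parameter names, and source names here are all hypothetical illustrations, not the add-on's real ones.

```python
# Track per-source scrape positions in the plugin URL itself: serialize a
# {source: next_page} dict into the query string of the "Next page" item,
# then restore it on the next invocation.
import json
try:
    from urllib.parse import urlencode, parse_qsl   # Python 3
except ImportError:                                  # Python 2 fallback
    from urllib import urlencode
    from urlparse import parse_qsl

def next_page_url(positions, excess_count):
    # positions: e.g. {'animals': 3, 'categories': 2} -> next source pages
    return 'plugin://my.addon/?' + urlencode(
        {'action': 'listing',
         'positions': json.dumps(positions),
         'excess': excess_count})

def read_positions(url):
    params = dict(parse_qsl(url.split('?', 1)[1]))
    return json.loads(params['positions']), int(params['excess'])
```

On each call the add-on would read the positions back, scrape only the sources whose stored page it still needs, and write the updated dict into the next URL, which sidesteps the "source page vs. excess" ambiguity entirely.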
