Proposal - request/urllib wrapper.
#1
Recently a discussion came up concerning the load Kodi instances place on meta APIs... I see from the DevCon notes this was also a topic.

Giving the situation some thought, I was thinking about working on a wrapper to replace Python's requests and/or urllib libraries. Something addressable through Kodi's API, akin to the xbmcvfs module.

Requests could then be cached by URL in Kodi's database schema, similar to how the thumbnail cache works... Is this a feasible solution? Team feedback would be appreciated.
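To make the idea concrete, here is a rough sketch of what such a URL-keyed cache could look like. Everything here is hypothetical: `cached_get` is not an existing Kodi API, and the fetch callable is injected so the caching logic stays independent of whichever HTTP layer (urllib, requests, or a core C++ implementation) ends up underneath.

```python
import sqlite3
import time

def cached_get(conn, url, fetch, ttl=3600):
    """Return the response body for url, served from a URL-keyed SQLite cache.

    conn  - an open sqlite3 connection (in Kodi this could live in the
            existing database schema, like the thumbnail cache does)
    fetch - any callable url -> bytes; injected so this sketch needs no
            particular HTTP library
    ttl   - seconds before a cached entry is considered stale
    """
    conn.execute('CREATE TABLE IF NOT EXISTS urlcache '
                 '(url TEXT PRIMARY KEY, body BLOB, fetched REAL)')
    row = conn.execute('SELECT body, fetched FROM urlcache WHERE url = ?',
                       (url,)).fetchone()
    if row is not None and time.time() - row[1] < ttl:
        return row[0]          # fresh cache hit: no network traffic at all
    body = fetch(url)          # miss or stale: hit the network once
    conn.execute('REPLACE INTO urlcache (url, body, fetched) '
                 'VALUES (?, ?, ?)', (url, body, time.time()))
    conn.commit()
    return body
```

The point of the shared table is that two plugins requesting the same URL would hit the network only once between them.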
Lunatixz - Kodi / Beta repository
PseudoTV - Forum | Website | Youtube | Help?
#2
I'm not a C++ developer (although I know the basics) and more of a Python guy, but why do we need this? I agree that urllib/urllib2 are not very convenient, but requests has a very decent and Pythonic API. I guess you could expose curl to Python or even write your own implementation, for example with boost::asio, but what's the point? I'm not criticizing your idea, I'm just trying to understand the rationale behind it.
#3
(2017-05-09, 12:10)Roman_V_M Wrote: I'm not a C++ developer (although I know the basics) and more of a Python guy, but why do we need this? I agree that urllib/urllib2 are not very convenient, but requests has a very decent and Pythonic API. I guess you could expose curl to Python or even write your own implementation, for example with boost::asio, but what's the point? I'm not criticizing your idea, I'm just trying to understand the rationale behind it.

The sole reason is to provide a universal cache to offload the burden on APIs like Trakt and Fanart.tv.

Perhaps it's not the correct approach? As it is now, individual plugins that access APIs are responsible for cache management. Since it's not standardized, you'll have two plugins potentially accessing the same URL and storing identical data in their respective caches.
#4
(2017-05-09, 14:46)Lunatixz Wrote: The sole reason is to provide a universal cache to offload the burden on APIs like Trakt and Fanart.tv.

Perhaps it's not the correct approach? As it is now, individual plugins that access APIs are responsible for cache management. Since it's not standardized, you'll have two plugins potentially accessing the same URL and storing identical data in their respective caches.

I see your point. However, quick googling brought me this: https://github.com/reclosedev/requests-cache. So maybe there's no need to reinvent the wheel when we can use existing tools? Caches can be stored in Kodi's temp folder. All we need is a common naming convention for them, for example tvdb.sqlite, tmdb.sqlite and such. Something like this (I skipped the authentication part):

Code:
import os
import xbmc
from requests_cache import CachedSession

# Kodi's (Python 2) translatePath() returns a byte string, hence the decode
tempdir = xbmc.translatePath('special://temp').decode('utf-8')
# requests_cache appends the '.sqlite' extension to the cache name itself
session = CachedSession(os.path.join(tempdir, 'tvdb'))
result = session.get('https://api.thetvdb.com/series/78874').json()

Yes, we need to make add-on developers follow this convention, but the same applies to the hypothetical new caching API - we would need to make them use it all the same.
#5
There are plenty of existing caching modules for Python; I just felt this was the best way to attempt unification among add-ons. Maybe I'm wrong; the last thing I'd like to do is waste time coding something unusable. ;)
BTW, this also has the hidden benefit of being able to tally Kodi's impact on the net beyond individual project keys.

Thanks for the helpful info :)

If I had to suggest one meta project that should become the "norm" for plugin developers, I'd suggest @marcelveldt's https://github.com/marcelveldt/script.mo...adatautils

His caching mechanism is an ingenious mix of Kodi's window properties and SQL; I feel dumb I didn't think of it first :P
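For illustration, the two-tier idea could be sketched like this. A plain dict stands in for Kodi's window properties (in a real add-on that tier would be `xbmcgui.Window(10000)` properties, shared by everything in the running session), and the class and method names are illustrative, not metadatautils' actual API:

```python
import json
import sqlite3

class TwoTierCache(object):
    """Sketch of a window-properties + SQL cache: a fast in-memory tier is
    checked first, backed by a persistent SQLite tier that survives
    restarts. Hits in the SQLite tier are promoted to the memory tier."""

    def __init__(self, db_path):
        self._mem = {}  # stand-in for xbmcgui.Window(10000) properties
        self._db = sqlite3.connect(db_path)
        self._db.execute('CREATE TABLE IF NOT EXISTS cache '
                         '(key TEXT PRIMARY KEY, value TEXT)')

    def get(self, key):
        if key in self._mem:                       # tier 1: memory
            return json.loads(self._mem[key])
        row = self._db.execute('SELECT value FROM cache WHERE key = ?',
                               (key,)).fetchone()
        if row is not None:                        # tier 2: SQLite
            self._mem[key] = row[0]                # promote to memory
            return json.loads(row[0])
        return None

    def set(self, key, value):
        blob = json.dumps(value)                   # window props hold strings
        self._mem[key] = blob
        self._db.execute('REPLACE INTO cache (key, value) VALUES (?, ?)',
                         (key, blob))
        self._db.commit()
```

The appeal of the design is that repeated lookups within one Kodi session never touch the disk at all.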
#6
+1
I was thinking exactly the same thing after the discussion: a forced front cache (local).

To have less effect on add-on implementations, IMHO the best approach is to implement the HTTP/1.1 caching mechanism, following the Expires tag in the HTTP header via HEAD requests, like a modern browser.

Time-based caches are not suitable for all applications.

LRU-based implementations are not effective either.

There are also modules for requests doing this, but for urllib2 there is no good implementation AFAIK.
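As a sketch of the header-driven approach: a client could issue a cheap HEAD request, read the `Expires` header, and only re-download when the cached copy has expired. `is_fresh` below is a hypothetical helper, and a real client would also have to honour `Cache-Control: max-age`, which overrides `Expires` when both are present:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def is_fresh(expires_header, now=None):
    """Decide from an HTTP/1.1 Expires header (e.g. taken from a HEAD
    response) whether a cached copy is still usable.

    Returns False for a missing or unparsable header, i.e. treats the
    response as uncacheable, which is the conservative choice.
    """
    if now is None:
        now = datetime.now(timezone.utc)
    try:
        expires = parsedate_to_datetime(expires_header)
    except (TypeError, ValueError):
        return False           # absent/garbled header: treat as stale
    return now < expires
```

This keeps the cache policy in the server's hands, which is exactly why it works for APIs where a fixed client-side TTL would be wrong.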
#7
I looked into the same idea some time ago. Normal audio/video/picture add-ons could profit from this in terms of loading time.

On the other hand, I have the feeling that we shouldn't cache the raw HTTP page, but the extracted results instead. This would reduce file sizes and the waiting time on low-power systems by eliminating the re-scraping of the file.


I wrote a minimal abstraction layer for my video add-ons, which allows me to store any results as a JSON file. I've used this for a Krypton-related workaround.

My next version of Unithek (a German equivalent to USTV VOD) will feature a static cached list of all provided shows from the national TV channels. A planned version will fetch updates directly on the clients.


The "API" is quite simple. The content plugin itself passes a list of entries (dicts) to the wrapper, which handles everything xbmc*-related (listitem stuff, xbmcvfs, ...).

Here is an example encoded as json file:
Code:
{
    "list": [
        {"_type": "shows", "_assetId": "100000004", "_tvshowtitle": "3satbuchzeit", "_originChannelId": "100000004", "_channelLogo": "http://www.3sat.de/mediaplayer/contentblob/logos/3sat_91x17.jpg", "_name": "3satbuchzeit", "url": "http://www.3sat.de/mediathek/xmlservice/web/aktuellste?maxLength=50&id=100000004", "mode": "xmlListPage", "_url": "http://www.3sat.de/mediathek/?red=buchzeit", "_fanart": "http://www.3sat.de/mediaplayer/stills/100000004_946x532.jpg", "_plot": "Buchzeit - viertelj\u00e4hrlich, anl\u00e4sslich der Buchmessen in Frankfurt und Leipzig, zu Beginn der Sommerferien und zu Weihnachten.  Im Szenelokal \"Oosten\" am Frankfurter Osthafen...", "_channel": "3sat", "_duration": "10", "_thumb": "http://www.3sat.de/mediaplayer/stills/100000004_946x532.jpg"},
        {"_type": "shows", "_assetId": "100000006", "_tvshowtitle": "Ab 18!", "_originChannelId": "100000006", "_channelLogo": "http://www.3sat.de/mediaplayer/contentblob/logos/3sat_91x17.jpg", "_name": "Ab 18!", "url": "http://www.3sat.de/mediathek/xmlservice/web/aktuellste?maxLength=50&id=100000006", "mode": "xmlListPage", "_url": "http://www.3sat.de/mediathek/?red=ab18", "_fanart": "http://www.3sat.de/mediaplayer/stills/100000006_946x532.jpg", "_plot": "Die Sendereihe pr\u00e4sentiert neue Dokumentarfilme, die in die Erlebnis- und Gef\u00fchlswelt junger Erwachsener eintauchen und spannende Geschichten von Erwachsenwerden heute, von Entdeckungen und...", "_channel": "3sat", "_duration": "6", "_thumb": "http://www.3sat.de/mediaplayer/stills/100000006_946x532.jpg"}
    ],
    "cachetime": "TODO cachetime",
    "scriptname": "lib3sat",
    "scriptpath": "script.module.lib3sat",
    "ttl": "TODO ttl",
    "channel": "3sat"
}

I'm not saying that this is the way things should get done, but it might provide a bit of input for some of you.
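To sketch what a wrapper around such a JSON file might do, here are two hypothetical helpers (not the actual lib3sat code) in which the `cachetime` and `ttl` fields left as TODO above become a write timestamp and an expiry window:

```python
import json
import os
import time

def save_results(path, entries, ttl=86400):
    """Store extracted entries (a list of dicts, like the 'list' above)
    together with when they were written and how long they stay valid."""
    payload = {'list': entries, 'cachetime': time.time(), 'ttl': ttl}
    with open(path, 'w') as f:
        json.dump(payload, f)

def load_results(path):
    """Return the cached entries, or None when missing or expired so the
    caller knows it has to scrape again."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        payload = json.load(f)
    if time.time() - payload['cachetime'] > payload['ttl']:
        return None            # expired: caller should re-scrape
    return payload['list']
```

Storing extracted dicts rather than raw pages keeps the files small and means low-power boxes skip the parsing step entirely on a cache hit.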
#8
(2017-05-10, 00:29)membrane Wrote: On the other hand, I have the feeling that we shouldn't cache the raw HTTP page, but the extracted results instead. This would reduce file sizes and the waiting time on low-power systems by eliminating the re-scraping of the file.

Yes, sorry, I thought it was implied I'd cache in JSON; caching raw HTTP is usually bad practice :)

I'm guessing by the lack of activity here that this idea is the wrong approach...
#9
(2017-05-10, 23:33)Lunatixz Wrote: I'm guessing by the lack of activity here that this idea is the wrong approach...

It depends. Yes, re-implementing a web client would be a waste of time, but some common cache for public JSON/XML APIs is a sound idea. BTW, I totally forgot that Kodi scrapers will now be Python-based and they should have some caching mechanism. So it would be good if this scraper cache could be available to other Python add-ons, so they could share raw data received from the respective APIs. I haven't been following the Python scraper development and don't know if some caching mechanism is already in place, but the whole idea needs further consideration.
#10
(2017-05-11, 10:21)Roman_V_M Wrote: It depends. Yes, re-implementing a web client would be a waste of time, but some common cache for public JSON/XML APIs is a sound idea. BTW, I totally forgot that Kodi scrapers will now be Python-based and they should have some caching mechanism. So it would be good if this scraper cache could be available to other Python add-ons, so they could share raw data received from the respective APIs. I haven't been following the Python scraper development and don't know if some caching mechanism is already in place, but the whole idea needs further consideration.

Yeah, I forgot about Python scrapers... I noticed the commits a while ago but never looked into it. Perhaps the team has already introduced a caching system.
#11
(2017-05-12, 20:56)Lunatixz Wrote: Yeah, I forgot about Python scrapers... I noticed the commits a while ago but never looked into it. Perhaps the team has already introduced a caching system.

Currently the Python scrapers are a WIP and no caching has been implemented yet.
#12
My $0.02: a big part of the reason for using Python for scrapers is flexibility. This flexibility includes implementing whichever caching mechanism is appropriate. But core should not orchestrate this; keep it on the Python side.
#13
(2017-05-09, 20:10)Roman_V_M Wrote: I see your point. However, quick googling brought me this: https://github.com/reclosedev/requests-cache. So maybe there's no need to reinvent the wheel when we can use existing tools? Caches can be stored in Kodi's temp folder. All we need is a common naming convention for them, for example tvdb.sqlite, tmdb.sqlite and such.

Yes, we need to make add-on developers follow this convention, but the same applies to the hypothetical new caching API - we would need to make them use it all the same.

I actually like this idea. The only problem I can think of is when add-ons with different requests-cache versions try to access it.
#14
(2017-05-17, 11:45)Razze Wrote: I actually like this idea. The only problem I can think of is when add-ons with different requests-cache versions try to access it.

We can add requests-cache as a module add-on, as is done with requests, BeautifulSoup and other commonly used Python libs. What I also like about requests-cache is that it can transparently monkey-patch requests, which can be useful if requests is used inside some third-party library that you can't/won't modify.
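To illustrate the monkey-patch idea without assuming requests/requests_cache are importable, the stand-in below mimics in spirit what `requests_cache.install_cache()` does: it rebinds the module-level `Session` to a caching subclass, so third-party code that merely imports the module gets caching without being modified. All names here are stand-ins, not the real libraries:

```python
import types

network_calls = []  # records simulated network hits

class Session(object):
    """Stand-in for requests.Session: get() always 'hits the network'."""
    def get(self, url):
        network_calls.append(url)
        return 'body-for-' + url

class CachedSession(Session):
    """Stand-in for requests_cache.CachedSession: same interface, cached."""
    def __init__(self):
        self._store = {}
    def get(self, url):
        if url not in self._store:          # only the first call goes out
            self._store[url] = Session.get(self, url)
        return self._store[url]

# a stand-in for the 'requests' module itself
fake_requests = types.SimpleNamespace(Session=Session)

def install_cache(module):
    """Analogue of requests_cache.install_cache(): rebind Session so every
    later module.Session() is transparently cached."""
    module.Session = CachedSession

install_cache(fake_requests)
session = fake_requests.Session()  # "third-party" code is unchanged
```

This is why the monkey-patch route is attractive for Kodi: a library buried three dependencies deep still gets the shared cache for free.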
#15
I'm not sure if that solves the problem I was trying to point out.

But hopefully the cache format is stable and kept compatible from version to version.

Edit: You might be right: Kodi will force most people to use the latest version, so it might work nicely.
