2010-01-07, 12:46
Well okay, I guess that settles it then..
I can certainly use the XML HTTP API, but I don't know about client-side caching. Scraper caching only seems to last for the duration of a single update process.
On the wiki page it looks like studio and cast are missing from the API description, but that's not so bad, although XBMC can use cast names to cross-reference library entries for a specific person, which is quite nice.
Maybe someone can elaborate on a "time to live" based cache for the scrapers? But I fear that's only something that could be done through scripts or plugins.
It would almost certainly be required to download the anime titles database in order to perform searches for aids (the anime IDs), or we could go through Google as a fallback method.
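To illustrate what I mean, here's a rough sketch of searching a local copy of the titles dump for an aid. The pipe-separated aid|type|lang|title layout, the file name and the function are all my assumptions, just to show the idea, nothing from XBMC:

[code]
#include <algorithm>
#include <cctype>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

static std::string ToLower(std::string s)
{
  std::transform(s.begin(), s.end(), s.begin(), ::tolower);
  return s;
}

// Scans the local titles dump line by line and returns the first aid
// whose title contains the query (case-insensitive), or -1 if none.
int FindAid(const std::string& dumpPath, const std::string& query)
{
  std::ifstream dump(dumpPath.c_str());
  std::string line;
  std::string needle = ToLower(query);
  while (std::getline(dump, line))
  {
    // assumed dump layout: aid|type|lang|title
    std::istringstream fields(line);
    std::string aid, type, lang, title;
    std::getline(fields, aid, '|');
    std::getline(fields, type, '|');
    std::getline(fields, lang, '|');
    std::getline(fields, title);
    if (ToLower(title).find(needle) != std::string::npos)
      return atoi(aid.c_str());
  }
  return -1; // no match in the dump
}

int main()
{
  std::cout << "aid: " << FindAid("anime-titles.dat", "cowboy bebop") << std::endl;
  return 0;
}
[/code]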
Although cache purging doesn't seem to be handled by the scraper itself, at least from what I saw in ScraperUrl.cpp, maybe the purge could honor some "time to live" so the files aren't removed right after the scraping has been done.
We could do something like <url cache="file.xml" ttl="24">http://..</url>, with ttl set in hours. Any devs around?
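Something along these lines is what I have in mind for honoring the ttl. Just a rough sketch; the function name and the mtime-based age check are my own invention, not anything in ScraperUrl.cpp:

[code]
#include <sys/stat.h>
#include <ctime>
#include <iostream>
#include <string>

// Returns true if the cached file exists and is younger than ttlHours.
bool CacheStillValid(const std::string& cachePath, int ttlHours)
{
  struct stat st;
  if (stat(cachePath.c_str(), &st) != 0)
    return false; // no cached copy yet, must fetch
  double ageSeconds = difftime(time(NULL), st.st_mtime);
  return ageSeconds < ttlHours * 3600.0;
}

int main()
{
  // Only hit the network when the cached copy has expired.
  if (!CacheStillValid("file.xml", 24))
    std::cout << "cache stale, refetch from the url" << std::endl;
  else
    std::cout << "cache still fresh, reuse file.xml" << std::endl;
  return 0;
}
[/code]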
@spiff
make sure you read about the ban rules for the HTTP XML API; it would definitely require a more advanced caching mechanism on XBMC's side.
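Even a dumb client-side throttle on top of the cache would already help stay under the limits, something like this (the 2-second gap is just a guess on my part, the wiki has the real numbers):

[code]
#include <ctime>
#include <iostream>

// Refuses to let requests out more often than once every minGapSeconds.
class RequestThrottle
{
public:
  explicit RequestThrottle(int minGapSeconds)
    : m_minGap(minGapSeconds), m_last(0) {}

  // Returns true when a request may be sent now, false if we must wait.
  bool AllowRequest()
  {
    time_t now = time(NULL);
    if (m_last != 0 && difftime(now, m_last) < m_minGap)
      return false;
    m_last = now;
    return true;
  }

private:
  int m_minGap;
  time_t m_last;
};

int main()
{
  RequestThrottle throttle(2); // assumed gap, check the wiki for the real limit
  std::cout << throttle.AllowRequest() << std::endl; // 1: first request goes out
  std::cout << throttle.AllowRequest() << std::endl; // 0: too soon, hold it back
  return 0;
}
[/code]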
And if you want me to do the XML scraper, I'm fine with it; I'm already halfway there anyway.