2009-08-11, 03:50
smeehrrr Wrote:A couple of observations on the latest rev of the code:
1) ScraperManager isn't as tolerant of malformed scrapers (several of which appear to be included with XBMC) as the previous version. Adding a catch on XmlException in ScraperManager() fixes that problem and skips the bogus scrapers.
2) The various Get*Details methods on ScraperManager actually modify the ScrapeResultsEntity that is passed in, in such a way that calling the function twice with the same ScrapeResultsEntity leads to the second call failing, because instead of a single Url it now has a bunch. I'm not sure whether this is correct behavior, but it was certainly unexpected. To get around this I added a Clone() method to ScrapeResultsEntity and have the Get*Details calls clone the input parameter before use, which works for me but may not be the intended usage. Are those calls supposed to return information in the resultsEntity parameter?
1) I use the latest SVN scrapers in each release, and it has no problem loading any of the scrapers (or skipping them in the event of failure; the catch for this is in ScraperInfo, not ScraperManager), so I'm not quite sure what you're running into. If the XML is malformed, ScraperManager won't load it. Proper formation of the XML is the responsibility of the scraper writer, not scraperxml.
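For what it's worth, the "skip the bogus scrapers" behavior the first post describes can be sketched like this. This is a minimal, hypothetical example using the standard .NET XmlDocument API; LoadScrapers is an illustrative helper, not the actual scraperxml entry point:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;

static List<XmlDocument> LoadScrapers(string scraperDir)
{
    var loaded = new List<XmlDocument>();
    foreach (string path in Directory.GetFiles(scraperDir, "*.xml"))
    {
        try
        {
            var doc = new XmlDocument();
            doc.Load(path); // throws XmlException on malformed XML
            loaded.Add(doc);
        }
        catch (XmlException ex)
        {
            // Skip the malformed scraper instead of failing the whole load.
            Console.Error.WriteLine("Skipping " + path + ": " + ex.Message);
        }
    }
    return loaded;
}
```

Whether the catch lives in ScraperManager or ScraperInfo, the effect is the same: one bad file doesn't abort loading the rest.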
2) This never happened to me; let me know how you're calling it. ScrapeResultsEntity is supposed to support multiple URLs, because that's the way the XBMC code works: multiple URLs per result are possible (for example, tv.com uses three URLs in its GetDetails function).
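The defensive-copy workaround from the first post would look roughly like this. The fields shown are assumptions for illustration (the real ScrapeResultsEntity layout isn't given in the thread); the point is that Clone() must copy the URL list itself, not just the reference, so repeated Get*Details calls on the original don't accumulate URLs:

```csharp
using System.Collections.Generic;

public class ScrapeResultsEntity
{
    public string Title;
    public List<string> Urls = new List<string>();

    // Deep copy: a fresh list means mutations on the copy
    // never leak back into the caller's original entity.
    public ScrapeResultsEntity Clone()
    {
        var copy = new ScrapeResultsEntity { Title = this.Title };
        copy.Urls.AddRange(this.Urls);
        return copy;
    }
}
```

Note that a MemberwiseClone() here would not be enough, since it would share the same Urls list between the original and the copy.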