2012-06-20, 16:34
I made an add-on for Frodo that does exactly such a thing
http://forum.xbmc.org/showthread.php?tid=132714
Quote:June 20th, 2012 at 08:42 | #13
I'd tried the plugin, but on the screen where you have to select which source directories you want to submit, the list of directories is too long for my display (running at 1080p) and the list isn't scrollable. This has to be solved; otherwise my data, at least, will be useless/counterproductive.
(2012-06-16, 11:30)topfs2 Wrote: I'm not 100% sure I follow, I'd love some more examples. What I'm not sure of is whether you want the configuration as an xbmc user, as xbmc (code using this engine), or as a scraper developer. I haven't 100% decided what will trigger the scanning; what I'm focusing on mostly now is what to do once you know file X exists and want to gather data on it. I'd love some thoughts on the actual scanning process too if it's of interest in this project.
(2012-06-20, 23:46)jmarshall Wrote: topfs2 is on holiday for the next 2 weeks.
If someone else could please take the script and fix it up it would be much appreciated.
(2012-06-27, 12:59)DonJ Wrote:(2012-06-16, 11:30)topfs2 Wrote: I'm not 100% sure I follow, I'd love some more examples. What I'm not sure of is whether you want the configuration as an xbmc user, as xbmc (code using this engine), or as a scraper developer. I haven't 100% decided what will trigger the scanning; what I'm focusing on mostly now is what to do once you know file X exists and want to gather data on it. I'd love some thoughts on the actual scanning process too if it's of interest in this project.
I think the file scanning process should be completely decoupled from the scraping process. Hence the process would be as follows:
1) "Something" finds a file (this something might be an addon, an external tool which notifies xbmc via e.g. JSON, or the integrated xbmc file scanner)
2) The path to the file or directory is pushed to the scraper, which starts the process of gathering metadata via XML scrapers etc.
Therefore, the important part IMO is to create a good, easy-to-use API for pushing paths/directories to the scraper. The same API should probably also allow deleting data from the library.
I think this would really open up the file scanning process to third-party addon/tool developers. Hope this helps.
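A minimal sketch of the push-based API DonJ describes, as JSON-RPC 2.0 request builders; the method names `Library.ScanPath` and `Library.RemovePath` are hypothetical stand-ins, not actual XBMC methods:

```python
import json


def make_scan_request(path, request_id=1):
    """Build a JSON-RPC 2.0 request asking the scraper to gather
    metadata for a path. Method name is a hypothetical placeholder."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "Library.ScanPath",    # hypothetical method name
        "params": {"path": path},
        "id": request_id,
    })


def make_remove_request(path, request_id=2):
    """Companion call to delete previously scraped data for a path,
    as DonJ suggests the same API should allow."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "Library.RemovePath",  # hypothetical method name
        "params": {"path": path},
        "id": request_id,
    })


print(make_scan_request("/media/movies/Avatar (2009)/"))
```

Any file finder (addon, external tool, or the built-in scanner) would POST such a payload to xbmc's JSON-RPC endpoint, keeping the finding and scraping sides fully decoupled.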
(2012-07-01, 04:57)lboregard Wrote: on a similar note, it should be possible to push ids (imdbid, tmdbid, tvdbid and such) to the scrapers in a way that can be called from the json-rpc interface that's under development.
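lboregard's suggestion amounts to a second entry point that takes an external id instead of a path. A sketch, with a hypothetical method name `Scraper.FetchById` (no such JSON-RPC method existed at the time):

```python
import json

# External id namespaces the scraper could accept.
KNOWN_ID_TYPES = {"imdb", "tmdb", "tvdb"}


def make_fetch_by_id_request(id_type, value, request_id=1):
    """Build a JSON-RPC 2.0 request handing the scraper an external id
    (imdb/tmdb/tvdb) rather than a file path. Method name is hypothetical."""
    if id_type not in KNOWN_ID_TYPES:
        raise ValueError("unknown id type: %s" % id_type)
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "Scraper.FetchById",  # hypothetical method name
        "params": {"type": id_type, "id": value},
        "id": request_id,
    })


print(make_fetch_by_id_request("imdb", "tt0499549"))
```

This would let a remote or web client request metadata for a known title directly, without any file having been scanned first.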
(2012-07-04, 10:34)topfs2 Wrote:(2012-06-27, 12:59)DonJ Wrote:(2012-06-16, 11:30)topfs2 Wrote: I'm not 100% sure I follow, I'd love some more examples. What I'm not sure of is whether you want the configuration as an xbmc user, as xbmc (code using this engine), or as a scraper developer. I haven't 100% decided what will trigger the scanning; what I'm focusing on mostly now is what to do once you know file X exists and want to gather data on it. I'd love some thoughts on the actual scanning process too if it's of interest in this project.
I think the file scanning process should be completely decoupled from the scraping process. Hence the process would be as follows:
1) "Something" finds a file (this something might be an addon, an external tool which notifies xbmc via e.g. JSON, or the integrated xbmc file scanner)
2) The path to the file or directory is pushed to the scraper, which starts the process of gathering metadata via XML scrapers etc.
Therefore, the important part IMO is to create a good, easy-to-use API for pushing paths/directories to the scraper. The same API should probably also allow deleting data from the library.
I think this would really open up the file scanning process to third-party addon/tool developers. Hope this helps.
This is a good thing and I will for sure keep it in mind. The first version will not do any part of the file finding but will rather be given the files it is meant to scan. Later we could add file finders in Python as well.
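A Python file finder of the kind topfs2 mentions could be as small as a directory walk that yields likely video files and hands each one to the engine; the extension list and the finder itself are illustrative assumptions, not part of any actual XBMC code:

```python
import os

# Extensions treated as video files -- an illustrative subset.
VIDEO_EXTS = {".mkv", ".avi", ".mp4", ".m4v"}


def find_video_files(root):
    """Walk a directory tree and yield paths that look like video files.
    A finder like this would push each path to the scraper engine."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in VIDEO_EXTS:
                yield os.path.join(dirpath, name)
```

Because the finder only produces paths, it stays fully decoupled from scraping: the same generator output could feed the built-in scanner or an external tool.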
(2012-07-01, 04:57)lboregard Wrote: on a similar note, it should be possible to push ids (imdbid, tmdbid, tvdbid and such) to the scrapers in a way that can be called from the json-rpc interface that's under development.
You want to be able to ask the engine for information about a specific movie even if it's not scraped and has no file coupled to it? E.g. I don't have the Avatar file, but a remote could still ask for the data about it? If so, that is a very good suggestion and I will keep it in mind!