Clumsy Wrote:What I did when hacking around on plugins was to enable debug logging in xbmc, then minimize xbmc and open a log viewer that can tail a log file. There are several around for every system you want; on linux, a console running "tail -f xbmc.log" works fine. Then I would hack away and test my plugin on the fly by switching back to xbmc.
I found it helpful to build my basic functionality (the scraping and regular expressions to get the urls) in an ide outside of xbmc first. Well, not even an ide; one of those command-line python interpreters worked fine for me for real-time dabbling. When I had what I wanted, I integrated it step by step into a plugin. The last part is pretty easy once you understand how the concept of a plugin works (i.e. that it kind of calls itself again on menu level changes, if I recall correctly; it has been over a year).
Btw: what kind of content are you looking at? I personally don't care much for all those film/series streaming sites that randomly work for two weeks and then go down again. I am more of a fan of pages like ted.com (there is already a great plugin for that, but maybe there are similar sites) or other sites with free content that stay online for years. Have fun coding
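The interpreter-first workflow Clumsy describes can be sketched like this. The page snippet and the pattern below are made-up examples, just to show the kind of dabbling you would do before moving the regex into a plugin:

Code:
import re

# A hypothetical page snippet, as you might paste it into the interpreter
# while prototyping; the real HTML would come from the target site.
html = '<a href="/watch/talk_123.html">A sample talk</a>'

# Tweak the pattern interactively until it captures what you want.
pattern = re.compile(r'href="(/watch/[^"]+)"')
match = pattern.search(html)
print(match.group(1))  # -> /watch/talk_123.html

Once the pattern behaves in the interpreter, dropping it into the plugin is mostly mechanical.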
Hi, I've been having a play with plugins over the past few days, and your mention of an 'ide' outside of xbmc for development has got me thinking about a potentially better solution for these types of plugins, where the main functionality of the plugin rarely changes and only the 'scraping and regular expressions' part changes when the target websites fancy a design change, etc.
The alternative solution is to host the 'scraping and regular expressions' in a separate function which is downloaded at plugin start time and imported into the runtime. Plugins could then dynamically pick up the latest version of the function, which would remove the need for all users to re-install the latest version of the plugin from the SVN Repo. It would also reduce the developer's time spent regenerating and uploading new versions of the plugin, and as the function would potentially be under the wiki's version control, anyone with RE skills could modify it.
Obviously, the function download locations (e.g. xbmc.org/wiki/plugins/<plugin name>_<ver>.def) would need to be defined and configured, but this might make for a better model. I've not had much exposure to plugins and previous discussions, so comments are welcome on whether this has already been discounted or whether alternative ways forward are already in the pipeline.
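One way a plugin could derive its download location from the naming scheme above. The base URL and helper name here are just placeholders for illustration, not a real xbmc.org endpoint:

Code:
# Hypothetical scheme: the plugin builds its definition URL from its own
# name and a definitions version. BASE is an assumption, not a real
# xbmc.org path.
BASE = 'http://xbmc.org/wiki/plugins'

def def_url(plugin_name, version):
    return '%s/%s_%s.def' % (BASE, plugin_name, version)

print(def_url('myplugin', '1.2'))
# -> http://xbmc.org/wiki/plugins/myplugin_1.2.def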
I used a local webserver to test the function download, import and use and it appears to work pretty well.
My sample code is shown below:
Code:
import urllib2, re

# URL from a local webserver, although this would eventually be the
# XBMC.org wiki or a similar website.
url = 'http://admin90:8080/python.html'

# The web page content is as follows (obviously without the #'s):
#<HTML><BODY><H1>
#def one():
#    global temp
#    temp = 9876
#    print "Temp: ", temp
#</H1></BODY></HTML>

# Fetch the web page
req = urllib2.Request(url)
req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3')
response = urllib2.urlopen(req)
link = response.read()
response.close()

# Find the start of the 'def one' function (just after the opening tags)
m = re.compile('<HTML><BODY><H1>').match(link)
start = m.end()

# Find the end of the 'def one' function (just before the closing tags)
n = re.compile('</H1></BODY></HTML>').search(link)
end = n.start()

# Write the extracted function to a local file
FILE = open('dyn.py', 'w')
FILE.write(link[start:end])
FILE.close()

# Import the file (the current directory must be on sys.path)
import dyn

# Run the downloaded functionality
dyn.one()
print "dyn.test = ", dyn.temp
Output:
Code:
>>>
Temp: 9876
dyn.test = 9876
>>>
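A variant of the same idea that skips the temporary dyn.py file: execute the downloaded source directly into a throwaway module object. This sketch is in Python 3 syntax and hard-codes the downloaded source so it is self-contained; in practice the string would come from the HTTP fetch above, and the usual caveat about executing code pulled off the network applies:

Code:
import types

# The function body as it would arrive from the server; hard-coded here
# so the sketch is self-contained.
downloaded_source = (
    "def one():\n"
    "    global temp\n"
    "    temp = 9876\n"
    "    print('Temp:', temp)\n"
)

# Build a throwaway module object and execute the code inside its
# namespace, avoiding the temporary dyn.py file entirely.
dyn = types.ModuleType('dyn')
exec(downloaded_source, dyn.__dict__)

dyn.one()
print('dyn.temp =', dyn.temp)  # -> dyn.temp = 9876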
Apologies for potentially hijacking this thread, but I thought the comments in it were relevant to the discussion.