2013-06-02, 22:34
Hi.
I'd like to extend XBMC with a proprietary natural user interaction device, so that the device can act as a remote control: most importantly, to move the selection up/down/left/right and to confirm (<enter>). I already have C++ code for that device which I'd like to make use of. This code generates events whenever an action is performed, but also before that, when "it" thinks the user is about to perform an action, and it keeps updating an "intention of the user" estimate for those actions (i.e. it provides continuous feedback to the user, which is crucial for this kind of interaction to work). From integrating the C++ code into other applications, I already have some visualizations for displaying the intentions. They work well, and I'd like to use them in XBMC, too.
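To make the event model concrete, here is a minimal sketch of the kind of callback interface the device code exposes. All names (NavAction, DeviceListener, simulateGesture) are made-up placeholders for illustration, not the real API:

```cpp
#include <functional>
#include <vector>

// Navigation actions the device can report (placeholder names).
enum class NavAction { Up, Down, Left, Right, Select };

// Hypothetical callback bundle: the device reports both committed actions
// and continuously updated "intention" estimates before an action happens.
struct DeviceListener {
    // Called once when the user actually performs an action.
    std::function<void(NavAction)> onAction;
    // Called repeatedly beforehand with the device's current guess and a
    // confidence in [0, 1], so the UI can render continuous feedback.
    std::function<void(NavAction, double)> onIntention;
};

// Stand-in for the real device code: emits two intention updates
// with rising confidence, then commits the action.
void simulateGesture(const DeviceListener& l) {
    l.onIntention(NavAction::Right, 0.3);
    l.onIntention(NavAction::Right, 0.8);
    l.onAction(NavAction::Right);
}

std::vector<NavAction> committed;  // actions actually performed
double lastConfidence = 0.0;       // latest intention confidence seen

void runDemo() {
    DeviceListener l;
    l.onIntention = [](NavAction, double c) { lastConfidence = c; };
    l.onAction = [](NavAction a) { committed.push_back(a); };
    simulateGesture(l);
}
```

The point is that the "intention" callbacks arrive continuously before the action commits, which is exactly the stream I'd want to feed into a visualization inside XBMC.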
I'm completely new to XBMC development, and I've spent some time now figuring out a way to integrate this. It seems that it is possible to write so-called addons in Python. Unfortunately, the API that XBMC offers to Python addons won't cut it for me: I saw that it is possible to move the selection up/down/left/right etc., and to create a small set of pre-defined widgets (e.g. progress bars, checkboxes, windows, etc.), but the visualization of the user's intention requires UI elements which are not offered by xbmcgui. What I'd basically like to have is the freedom to provide my own pixmaps/bitmaps (with alpha channel) that contain my own visualization.
Is that possible? If so, how? Are there guides explaining how to develop C++ DLL addons? (Oh, I almost forgot to mention: I'm using Microsoft Windows 7, and my priority is to get it to work there, but a platform-independent solution would be a nice extra.)
Cheers!
NZ912