2013-02-01, 23:03
Hi guys,
I'm developing an ambilight add-on for Philips Hue lights. My current approach is to call xbmc.RenderCapture(), calculate the average color of the capture, and adjust the room lights to that color. It works great as a proof of concept, but the lights have a slow reaction time: they take about 1 second to transition to a new color. The result is that it "works", but the color of the lights lags the video stream by 1 second. The delay is in the lights themselves, so I don't see a way to improve the speed there.
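For reference, the averaging step itself is simple. This is a rough sketch assuming the capture returns raw BGRA pixel bytes (which is what I believe xbmc.RenderCapture's getImage() gives back, though the byte order may differ by platform); the function below is just the pure color math, independent of XBMC:

```python
def average_color(pixels):
    """Average (R, G, B) of a raw BGRA byte buffer.

    `pixels` is assumed to be the bytearray returned by
    xbmc.RenderCapture().getImage() after a capture -- BGRA order
    is an assumption and may vary by platform/render backend.
    """
    n = len(pixels) // 4  # number of pixels (4 bytes each)
    r = g = b = 0
    for i in range(0, len(pixels), 4):
        b += pixels[i]      # blue channel first in BGRA
        g += pixels[i + 1]
        r += pixels[i + 2]
        # pixels[i + 3] is alpha; ignored for the light color
    return (r // n, g // n, b // n)
```

In the add-on I also capture at a tiny resolution (e.g. 32x32) so the loop stays cheap.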
I'm wondering if it's possible to use XBMC's video buffer to work around the problem. While playing a movie, XBMC buffers video ahead of the current position. What I would like to do is: "use xbmc.RenderCapture() to get the current frame, but if there are already frames in the buffer, use those and return the frame that will be shown 1 second from now".
Is this at all possible? And is it already implemented in the API?
Or, if you have any other workarounds or suggestions, I'm all ears.