(2015-03-06, 10:03)Tolriq Wrote: Anyway, not 100% related, but it would be cool if the implementation could also take care of sending video, as there is demand for live TV / add-on streaming too.
While pure streaming of those without them being played would require more complex things, such as another internal player to stream from, having a way to stream what is currently playing, with video, would be cool.
And I suppose that if this is kept in mind, even if it's not done in this project, and the code is designed to allow it, it would be easy to add support later.
That's true, I hadn't thought about live TV before. I will keep this in mind, even if it is probably not going to happen in this GSoC.
(2015-03-06, 18:18)mkortstiege Wrote: While we're at it. Complete rewrite in HTML5 Hah ..
That one is on my wishlist too, but I don't really know HTML :/ I hope someone will take care of it one day.
(2015-03-06, 21:43)Paxxi Wrote: Can't this be implemented with upnp? The remote would act as a renderer and kodi as the controller and server. Not sure how it would work with syncing the playback though.
One benefit of using upnp is that it would make the feature usable with a lot of stuff besides our remotes.
Sending audio to other devices could technically be feasible (and has already been done), but syncing is another beast. While the implementation I have in mind isn't based on UPnP, this could be an interesting project too. Maybe if I complete the implementation ahead of schedule (about which I have reservations), I'll look into it.
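To make the sync problem a bit more concrete: the rough idea I have in mind is timestamp-based playout, where the server stamps every audio chunk with a target playout time and the client holds the chunk until then. A minimal Python sketch (all names here are made up for illustration, and it assumes the two clocks are already synchronized, which is precisely the hard part):

```python
import time
from dataclasses import dataclass


@dataclass
class AudioChunk:
    playout_time: float  # absolute time at which the client should play this chunk
    samples: bytes       # raw PCM payload

# Safety margin so the chunk arrives before its playout time;
# a real value would be tuned to the network conditions.
PLAYOUT_DELAY = 0.200


def stamp_chunk(samples: bytes) -> AudioChunk:
    """Server side: schedule the chunk slightly in the future."""
    return AudioChunk(playout_time=time.time() + PLAYOUT_DELAY, samples=samples)


def wait_and_play(chunk: AudioChunk, play) -> None:
    """Client side: sleep until the scheduled playout time, then play.
    If the chunk arrived late, play it immediately (a real client would
    probably drop it instead to stay in sync)."""
    delay = chunk.playout_time - time.time()
    if delay > 0:
        time.sleep(delay)
    play(chunk.samples)
```

The whole scheme stands or falls with clock synchronization between server and client, which is what the latency discussion below is really about.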
(2015-03-07, 11:03)topfs2 Wrote: Upnp has been used successfully with pulseaudio before, so that could also be a possible path: http://askubuntu.com/questions/187086/how-do-i-set-up-live-audio-streams-to-a-dlna-compliant-device
Personally I think that doing it in JSON RPC is probably easier. And someone could duplicate the code to do it for upnp too.
I don't use PulseAudio on my desktop, but does anyone have feedback on its latency?
(2015-03-07, 11:14)Tolriq Wrote: I have no idea how audio sync is done, but since the delay is obviously always positive, it means that Kodi has to send the audio data before it is played on Kodi itself, so there needs to be some way to keep this delay in sync.
I do not know if UPnP has such a way to calculate and announce this delay.
Playing audio to anything will be quite easy; in the end it's just streaming packets. The only hard part will be to have a proper sync system that can be implemented on many client devices and does not rely on ultra-specific things.
I don't know either about the existence of such a functionality in UPnP, and I doubt one exists. But we could technically send some UPnP control messages to measure latency, and then compensate accordingly. However, this assumes there is no caching/latency in the remote player itself; otherwise we're stuck, and only two possibilities remain: either let the user compensate the delay manually (which isn't really what I'd call a good experience), or build a database of latencies for different players, etc.
Neither is a perfect solution, since the device the application runs on, its Wi-Fi drivers, the Wi-Fi signal strength... all of these can introduce delays.
Maybe a better solution would be a mix of all of these: automatically measure the delay and compensate accordingly, retrieve player and device latency from a database, let the user fine-tune the compensation, and maybe even send the results back to enrich the database.
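For the automatic measurement part, the clock offset and round-trip latency could be estimated NTP-style from four timestamps taken around a ping/pong exchange. A sketch of just the arithmetic (the timestamps themselves would have to come from real UPnP or JSON-RPC control messages, which is an assumption on my part):

```python
def estimate_clock_offset(t0: float, t1: float, t2: float, t3: float):
    """NTP-style estimation from four timestamps:
      t0: request sent      (server clock)
      t1: request received  (client clock)
      t2: reply sent        (client clock)
      t3: reply received    (server clock)
    Returns (offset, round_trip_delay), where `offset` is how far the
    client clock is ahead of the server clock, assuming the network
    delay is roughly symmetric in both directions."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

The symmetric-delay assumption is the usual weakness: on asymmetric Wi-Fi links the offset estimate is biased by half the asymmetry, which is one reason averaging over several exchanges would be needed in practice.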
Another problem is that if the latency changes (if you're moving around with a Wi-Fi device, start/stop a download, etc.), you would have to correct the compensation, probably causing a temporary lag/freeze/audio distortion, and maybe even stopping the remote player, depending on both the server and client implementations.
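One way to soften those corrections would be to track a smoothed latency estimate instead of jumping to every new measurement, e.g. a simple exponential moving average (just a sketch of the idea, not a settled design):

```python
class SmoothedLatency:
    """Exponentially-weighted moving average of latency measurements,
    so that a single Wi-Fi hiccup doesn't cause an audible jump in the
    compensation applied to the audio stream."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # 0 < alpha <= 1; smaller means smoother/slower
        self.value = None    # current estimate, None until first measurement

    def update(self, measurement: float) -> float:
        if self.value is None:
            self.value = measurement
        else:
            # Move only a fraction of the way toward the new measurement.
            self.value += self.alpha * (measurement - self.value)
        return self.value
```

A large, sustained change would still eventually shift the estimate, but gradually, which could let the client resample or slew its playback clock instead of glitching.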
So, I don't think audio streaming over UPnP is a trivial problem, but I don't think it's unsolvable either. In any case, it's probably too much for this GSoC (I could try to make a basic implementation if I have time, but that will probably be about it).
I hope I've addressed some of your questions about UPnP. I am not an authority on the topic, so feel free to discuss my answers; these are just my first thoughts on the problem.
EDIT: sorry, I forgot to answer you, wisler.
(2015-03-07, 09:47)wisler Wrote: Hi @ll,
the idea with a WiFi sink is very awesome.
@M@yeulC
I don't know if you have seen our ADSP system, but if you implement it as an AESink, I think it would be possible to do audio signal processing on the server side and stream the samples to the WiFi client.
Also a multizone audio setup is possible.
Or use my adsp.xconvolver addon to do 3D audio through headphones. This would be really great, because the calculation stays on the server side and the client could be a low-budget device (Raspberry Pi, Android phone, ...).
Thank you for the pointers. This could indeed be very handy for me, as I was thinking about a way to do this conversion to avoid having to implement a multi-format playback engine. The only downside is that I only have a Linux machine, so this software wouldn't work for me.
However, I don't understand how a convolution library could help me do 3D audio :/ It seems more related to HRTF to me (of which I don't know the exact calculations). Am I missing something?