Project idea : Sync audio with remote devices (Play audio on remote controls)
#31
I have no idea about how audio sync is done, but since the delay is obviously always positive, it means that Kodi has to send the audio data before it happens on Kodi itself, so there needs to be a way to keep this delay in sync somehow.

I do not know if UPnP has a way to calculate and announce this delay.

Playing audio to anything will be quite easy; in the end it's just streaming packets. The only hard part will be having a proper sync system that can be implemented on many client devices and does not rely on very device-specific features.
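The "send ahead" idea above can be sketched roughly like this (everything here is made up for illustration: the 200 ms lead time, the packet format, the clock-offset handling; it is not Kodi code):

```python
LEAD_TIME = 0.200  # seconds of lead the server gives the network (assumed value)

def make_packet(pcm_chunk, server_clock):
    """Stamp a chunk of PCM audio with a presentation time slightly
    in the future, so the client can receive and buffer it in time."""
    return {"pts": server_clock + LEAD_TIME, "pcm": pcm_chunk}

def should_play(packet, client_clock, clock_offset):
    """The client plays a chunk once its offset-corrected clock reaches
    the packet's presentation time; estimating clock_offset accurately
    is exactly the hard sync problem discussed in this thread."""
    return client_clock + clock_offset >= packet["pts"]

pkt = make_packet(b"\x00\x00", 10.0)
assert not should_play(pkt, 10.05, 0.0)  # too early: pts is 10.2
assert should_play(pkt, 10.25, 0.0)      # past the presentation time
```

With this scheme, the whole difficulty collapses into keeping `clock_offset` accurate on every client.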
Reply
#32
Yup, which is also why I think starting with JSON RPC is preferred, as then we have the option of putting sync control where it makes sense (probably the audio sink).

I'm mostly pointing to UPnP as it could be a fun next step if this project is proven to work.
If you have problems please read this before posting

Always read the XBMC online-manual, FAQ and search the forum before posting.
Do not e-mail XBMC-Team members directly asking for support. Read/follow the forum rules.
For troubleshooting and bug reporting please make sure you read this first.


"Well Im gonna download the code and look at it a bit but I'm certainly not a really good C/C++ programer but I'd help as much as I can, I mostly write in C#."
Reply
#33
(2015-03-06, 10:03)Tolriq Wrote: Anyway, not 100% related, but it would be cool if the implementation could also take care of sending video, as there is also demand for live TV / add-on streaming.
While pure streaming of those without playing them locally requires more complex machinery, such as another internal player to stream from, having a way to stream what is currently playing, including video, would be cool.

And I suppose that if this is kept in mind, even if not done in this project, and the code is designed to allow it, it would be easy to add support later.

That's true, I didn't think about live TV before. I will keep this in mind, even if it is probably not going to happen in this GSOC.

(2015-03-06, 18:18)mkortstiege Wrote: While we're at it. Complete rewrite in HTML5 Smile Hah ..

That one is on my wishlist too, but I don't really know HTML :/ I hope someone will take care of it one day.
(2015-03-06, 21:43)Paxxi Wrote: Can't this be implemented with upnp? The remote would act as a renderer and kodi as the controller and server. Not sure how it would work with syncing the playback though.

One benefit of using upnp is that it would make the feature usable with a lot of stuff besides our remotes.

Sending audio to other devices could technically be feasible (and it's already done), but syncing is another beast. While the implementation I am thinking of isn't based on UPnP, this could be an interesting project too. Maybe if I can complete the implementation ahead of schedule (a point on which I have reservations), I'll look into it.

(2015-03-07, 11:03)topfs2 Wrote: Upnp has been used successfully with pulseaudio before, so that could also be a possible path: http://askubuntu.com/questions/187086/how-do-i-set-up-live-audio-streams-to-a-dlna-compliant-device

Personally I think that doing it in JSON RPC is probably easier, and someone could duplicate the code to do it for UPnP too.
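To illustrate the JSON-RPC route (the method name below is entirely made up for the sake of the example; Kodi's actual JSON-RPC API defines no such call):

```python
import json

# Hypothetical sync-control message; only the JSON-RPC 2.0 envelope
# ("jsonrpc", "id", "method", "params") is real, the method is not.
sync_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "Audio.GetStreamClock",  # made-up method name
    "params": {"streamid": 0},
}
wire = json.dumps(sync_request)  # what would go over the socket
```

The nice property of this route is that the same transport already used by the remotes carries the sync traffic, so no second protocol stack is needed.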

I don't use pulseaudio on my desktop, but does anyone have feedback on its latency?

(2015-03-07, 11:14)Tolriq Wrote: I have no idea about how audio sync is done, but since the delay is obviously always positive, it means that Kodi has to send the audio data before it happens on Kodi itself, so there needs to be a way to keep this delay in sync somehow.

I do not know if UPnP has a way to calculate and announce this delay.

Playing audio to anything will be quite easy; in the end it's just streaming packets. The only hard part will be having a proper sync system that can be implemented on many client devices and does not rely on very device-specific features.

I don't know either about the existence of such functionality in UPnP, and I doubt it exists. But we could technically send some UPnP control messages to measure latency, and then compensate accordingly. However, this assumes there is no caching/latency on the remote player's side; otherwise we're stuck, and only two possibilities remain: either let the user manually compensate for the delay (which isn't really what I call a good experience), or build a database which contains latencies for different players, etc.
Neither is a perfect solution, since the device the application runs on, its Wi-Fi drivers, the Wi-Fi signal strength... all of this could cause delays.
Maybe a better solution would be a mix of all of these: automatically calculate the delay and compensate accordingly, retrieve player and device latency from a database, let the user fine-tune the compensation, and maybe even send those results back to enrich the database.
Another problem is that, if latency changes (if you're moving around with a Wi-Fi device, start/stop a download, etc.), you would have to correct the compensation, probably creating a temporary lag/freeze/audio distortion, and maybe stopping the remote player, depending on both the server and client implementations.

So, I don't think audio streaming over UPnP is a trivial problem, but I don't think it's unsolvable either. In any case, it's probably too much for this GSOC (I could try to make a basic implementation if I have time to do so, but that will probably be about it).
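For the "measure and compensate" mix described above, a very rough sketch (the helper names and the database/user-offset split are hypothetical):

```python
import time

def measure_one_way_delay(ping_remote, samples=10):
    """Estimate the one-way network delay as half the median round trip.
    `ping_remote` stands for any small request/answer exchange with the
    remote player (a UPnP control message, a JSON-RPC ping, ...)."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        ping_remote()  # blocks until the remote answers
        rtts.append(time.monotonic() - start)
    rtts.sort()
    return rtts[len(rtts) // 2] / 2.0  # median halves the jitter outliers

def total_compensation(network_delay, device_latency=0.0, user_offset=0.0):
    """Mix of the approaches above: the measured network delay, a known
    per-device latency (e.g. from a shared database), and a manual tweak."""
    return network_delay + device_latency + user_offset
```

The weak point is the one described above: any buffering inside the remote player is invisible to the ping, which is why the database entry and the manual offset are still needed on top of the measurement.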


I hope I've addressed some of your questions about UPnP. I am not an authority on the topic, so feel free to discuss my answers, but these are my first thoughts concerning this problem.

EDIT: sorry, I forgot to answer you, wisler.
(2015-03-07, 09:47)wisler Wrote: Hi @ll,

the idea of a WiFi sink is really awesome.

@M@yeulC
I don't know if you have seen our ADSP system, but if you implement it as an AESink, I think it would be possible to do the audio signal processing on the server side and stream the samples to the WiFi client.
A multi-zone audio setup would also be possible. Blush
Or use my adsp.xconvolver addon to do 3D audio through headphones. This would be really great, because the calculation stays on the server side and the client could be a low-budget device (Raspberry Pi, Android phone, ...).

Thank you for the pointers. This could indeed be very handy for me, as I was thinking about a way to do this conversion, to avoid having to implement a multi-format playback engine. The only downside is that I only have a Linux machine, so this software wouldn't work for me.
However, I don't understand how a convolution library could help me do 3D audio :/ It seems more related to HRTF to me (of which I don't know the exact calculations). Am I missing something?
I need to improve my English skills, feel free to correct me ;-)
My GSOC 2014 post CANCELLED 2015 one
Reply
#34
(2015-03-07, 20:03)M@yeulC Wrote: Thank you for the pointers. This could indeed be very handy for me, as I was thinking about a way to do this conversion, to avoid having to implement a multi-format playback engine. The only downside is that I only have a Linux machine, so this software wouldn't work for me.
However, I don't understand how a convolution library could help me do 3D audio :/ It seems more related to HRTF to me (of which I don't know the exact calculations). Am I missing something?
No, this is not only for Windows Wink It should work on all platforms. You can also compile the audio-dsp-addon-handling branch from alwinus under Linux. I use Lubuntu in my living room and it works great.
Yeah, the correct word is HRTF, and it's still possible with my LibXConvolver library, or later in Kodi with my adsp.xconvolver addon Big Grin
You can produce 3D audio with HRTF, e.g. if you use more than 8 audio channels Wink

(2015-03-07, 20:03)M@yeulC Wrote: Another problem is that, if latency changes (if you're moving around with a Wi-Fi device, start/stop a download, etc.), you would have to correct the compensation, probably creating a temporary lag/freeze/audio distortion, and maybe stopping the remote player, depending on both the server and client implementations.
I think latency is only a problem when we have to sync video and audio. If we only want to listen to music, I think latency is no problem.
So I think the first step will be to implement an Ethernet-Sink. When everything is working, you could try to sync the audio.
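A toy version of such an Ethernet sink could look like this (the UDP transport, default port and chunking are assumptions for illustration, not Kodi's actual AESink interface):

```python
import socket

class UdpAudioSink:
    """Toy 'Ethernet sink': forwards raw PCM chunks to a remote host
    over UDP. A real sink would also carry timestamps and sample-format
    information; port and chunk size here are made-up defaults."""
    def __init__(self, host, port=34567, chunk_size=1024):
        self.addr = (host, port)
        self.chunk_size = chunk_size
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def add_packets(self, pcm):
        # Split the buffer into fixed-size datagrams and send them out.
        for i in range(0, len(pcm), self.chunk_size):
            self.sock.sendto(pcm[i:i + self.chunk_size], self.addr)
```

As wisler says, getting bytes across like this is the easy first step; the sync layer can then be built on top of it.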

Another question is: can you estimate the latency if you know the signal strength and the distance through e.g. the WiFi router?

Edit: I think Ethernet-Sink is a better name than WiFi-Sink
Latest news about AudioDSP and my libraries is available on Twitter.

Developers can follow me on Github.
Reply
#35
@wisler : If the goal was only to listen to music, it's already possible since there's access to the media Wink

The need and proposal is really for audio from a video, and as such it needs sync.
Reply
#36
It would be nice for me if I could grab audio through my smartphone with a pair of headphones plugged in, while the audio still plays through the Intel pch-hdmi, when I'm at my in-laws' watching a movie with them on their Kodi box. They never learned how to watch movies without creating their own commentary tracks and Q/A.
Reply
#37
(2015-03-11, 23:54)Tolriq Wrote: @wisler : If the goal was only to listen to music, it's already possible since there's access to the media Wink
Yeah, sure, you can access the media directly, but for me it's much cooler to let ActiveAE and AudioDSP process the data and stream that to a dedicated device. That way you get high-quality audio processing (e.g. binaural rendering through headphones) without needing processing power on your client. Blush

(2015-03-11, 23:54)Tolriq Wrote: The need and proposal is really for audio from a video, and as such it needs sync.
Yeah, it would also be really nice if we could stream processed video and audio. Big Grin
Reply
#38
(2015-03-11, 23:54)Tolriq Wrote: The need and proposal is really for audio from a video, and as such it needs sync.

That's true, and that's what this project will focus on. However, the plan is to ultimately stream any audio generated on the box to a remote device, so you could easily do your processing server-side and get the result client-side. One downside of this system is that it gets a little complicated if you start syncing video, since the post-processing is done (as far as I know) in real time. So it is not possible to send audio ahead of its playback in this case, and the playback position needs to be adjusted on the sender, which may take it out of sync (see the answer to Dark_Slayer below). I may come up with a solution later, but this will not be my main focus. Keep in mind that I understood your request, though, and I will do my best to leave room for a future implementation if it's not trivial for me to implement it myself this summer.

(2015-03-20, 01:25)Dark_Slayer Wrote: It would be nice for me if I could grab audio through my smartphone with a pair of headphones plugged in while the audio still plays through the Intel pch-hdmi while at my in law's watching a movie with them on their kodi box. They never learned how to watch movies without creating their own commentary tracks and Q/A
This would definitely be possible, since the goal is not to alter sound playback on the server, but to send one more audio stream to a client. I'm not completely sure HDMI passthrough will be possible, though. I may need to modify the sound system to enable an arbitrary number of sinks, but I will address this issue in due time. This would also partially solve the issue quoted just before.
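The "arbitrary number of sinks" part could be as simple as a fan-out wrapper (sketch only; the `add_packets` interface is this example's own, not Kodi's):

```python
class SinkFanout:
    """Forward every audio buffer to an arbitrary number of sinks:
    the local output plus any network clients that attach later."""
    def __init__(self, sinks):
        self.sinks = list(sinks)

    def add_packets(self, pcm):
        # Each registered sink gets the same buffer; per-sink delay
        # compensation would happen inside the individual sinks.
        for sink in self.sinks:
            sink.add_packets(pcm)
```

A design like this also covers Dark_Slayer's headphone use case: the local HDMI sink and the phone sink simply receive the same buffers.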
Reply
#39
@M@yeulC : Do not forget to apply properly on the Google site to be accepted Smile

Really looking forward to this, and I will be happy to help with testing from Yatse during the work.
Reply
#40
@Tolriq : Thank you. I already applied on google-melange, but I am looking forward to any help I can get, especially on the Android side, since I am not really familiar with it.
Reply
#41
@M@yeulC : Might be worth looking into just sending a ping and using a Kalman filter for the network latency, and something like a Larsen test for the mobile device latency.

Also look here pg 29
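A one-dimensional Kalman filter over ping measurements, as suggested above, is only a few lines; the process and measurement variances below are made-up tuning values:

```python
class LatencyKalman:
    """Smooth noisy ping-based latency estimates with a 1-D Kalman
    filter: track slow drift, ignore per-sample jitter."""
    def __init__(self, initial=0.0, process_var=1e-5, meas_var=1e-3):
        self.x = initial      # current latency estimate (seconds)
        self.p = 1.0          # estimate variance (starts uncertain)
        self.q = process_var  # how fast the true latency drifts
        self.r = meas_var     # how noisy each ping sample is

    def update(self, measured_delay):
        self.p += self.q                # predict: uncertainty grows
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (measured_delay - self.x)
        self.p *= (1.0 - k)             # correct: uncertainty shrinks
        return self.x
```

Feeding it half the measured round-trip time gives a latency estimate that follows slow drift (moving around, Wi-Fi congestion) while smoothing out per-ping jitter.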
Reply
#42
Most of what is needed for synchronised audio over AirPlay is available on my GitHub page. Just a heads up.
Reply
#43
OMG he's alive \o/
Reply
#44
(2015-04-19, 11:24)elupus Wrote: Most of what is needed for synchronised audio over AirPlay is available on my GitHub page. Just a heads up.

Thank you. I will check this out. Do you mean under your Shairplay repository?
I wasn't planning to do it via AirPlay at first, but this sounds feasible. However, besides Kodi, I don't have any device able to receive an AirPlay stream.
It could ease the prototyping stage, though.
Reply
#45
Nah, I don't think he means the AirPlay-specific approach, but rather the adaptations he did to support syncing in general, here:

https://github.com/elupus/xbmc/commits/shairplay_synch (the Kodi side of things that allows audio sync; this would then need to be made generic).
AppleTV4/iPhone/iPod/iPad: HowTo find debug logs and everything else which the devs like so much: click here
HowTo setup NFS for Kodi: NFS (wiki)
HowTo configure avahi (zeroconf): Avahi_Zeroconf (wiki)
READ THE IOS FAQ!: iOS FAQ (wiki)
Reply
