Current state and vision about AudioDSP
#27
So after four months of silence in the AudioDSP subforum, I'd say it's time to give you a clearer overview of what I have done in this period of time. So expect more text than the usual Twitter messages. :P

I think I started to consider a rewrite of AudioDSP during Christmas last year. To understand some concepts, I started to write demo code to see how it would work. Some parts of the old AudioDSP implementation were fine, and some were more or less wrong and didn't work as they should. The first thing that came to mind were the mode categories/groups (pre-processing, post-processing, master, …), which are awesome and allow many more configurations than typical AVRs ever did. The possibility to order processing modes is also great. But these two features increased the complexity for both the implementation and the end user. To be honest, from the beginning I didn't meet any beginners or experts who fully understood these two concepts.

So I decided that the categories should only be available to expert or advanced users (this might need further discussion). This is the reason why I completely removed this functionality in my latest implementation. I will add it back in a later version, and then an expert user will be able to define their own groups of processing modes. Another issue was the logic behind these groups: for example, a master processing mode couldn't be used after a post-processing mode. Enforcing this was only possible in code, not from the UI. Consequently I also removed this logic, and you will be able to order the processing modes in any way you wish.
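Conceptually, the new chain then reduces to a plain ordered list of modes applied in sequence, with no category logic constraining where a mode may sit. Here is a minimal C++ sketch of that idea; all names (`IDSPMode`, `GainMode`, `DSPChain`) are illustrative and not Kodi's actual classes:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Illustrative sketch only: a processing mode is anything that can
// transform a block of samples.
struct IDSPMode {
  virtual ~IDSPMode() = default;
  virtual void Process(std::vector<float>& samples) = 0;
};

// Trivial example mode: multiplies every sample by a fixed gain.
struct GainMode : IDSPMode {
  explicit GainMode(float gain) : m_gain(gain) {}
  void Process(std::vector<float>& samples) override {
    for (float& s : samples)
      s *= m_gain;
  }
  float m_gain;
};

// The chain runs modes strictly in insertion order; the user may
// reorder them freely, with no group/category rules in the way.
class DSPChain {
public:
  void Append(std::unique_ptr<IDSPMode> mode) {
    m_modes.push_back(std::move(mode));
  }
  void Process(std::vector<float>& samples) {
    for (auto& mode : m_modes)
      mode->Process(samples);
  }
private:
  std::vector<std::unique_ptr<IDSPMode>> m_modes;
};
```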

Here is a prototype screenshot of my new AudioDSP Manager dialog. Sorry, the dialog currently doesn't have any real functionality, which is why no processing modes are shown.

[Screenshot: prototype of the new AudioDSP Manager dialog]

Another issue was that we were not able to use many of the FFmpeg algorithms and filters that are already present in Kodi. It would have taken too much effort to make them available from AudioDSP, and it wasn't really possible to add new algorithms without rewriting a lot of code. Furthermore, the existing stereo upmixing algorithm was already integrated into AudioDSP, but I guess no one really used it, because the corresponding setting was very well hidden. That's why I think it was not integrated in a very nice way. This was the reason why I started to define an interface (abstract class) for processing modes, and now add-ons and the internal modes in Kodi use it in the same way to perform audio processing. So AudioDSP doesn't really care whether the processing is done with an add-on or with FFmpeg. Currently the existing upmixer, channel remapper and resampler (previously all available through one class) are embedded into the new AudioDSP interface.
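The idea of that abstract class can be sketched as follows: one interface, with both an internal (e.g. FFmpeg-backed) mode and an add-on-provided mode behind it, so the engine never needs to know where the code lives. All class names here are hypothetical stand-ins, not Kodi's actual API:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// One interface for every processing mode, wherever it comes from.
struct IAudioProcessor {
  virtual ~IAudioProcessor() = default;
  virtual bool Create(unsigned int sampleRate) = 0;
  virtual void Process(std::vector<float>& samples) = 0;
};

// Stand-in for an internal mode wrapping an FFmpeg filter.
struct InternalHalfVolume : IAudioProcessor {
  bool Create(unsigned int) override { return true; }
  void Process(std::vector<float>& s) override {
    for (float& v : s)
      v *= 0.5f;
  }
};

// Stand-in for a mode provided by a binary add-on.
struct AddonInvert : IAudioProcessor {
  bool Create(unsigned int) override { return true; }
  void Process(std::vector<float>& s) override {
    for (float& v : s)
      v = -v;
  }
};

// The engine only ever sees the interface.
void RunAll(std::vector<std::unique_ptr<IAudioProcessor>>& modes,
            std::vector<float>& samples) {
  for (auto& m : modes)
    m->Process(samples);
}
```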

A further issue was the annoying AudioDSP enable button. I decided to completely remove it, which is why AudioDSP is now always enabled. If you don't configure the processing chain, the system automatically configures the already existing ActiveAE functionality with resampling, upmixing and so on.

Here is a screenshot of the current system audio settings.
[Screenshot: current system audio settings]

The next issue was that the AudioDSP system lived in the buffer pool of ActiveAE, Kodi's audio engine, and wasn't able to talk to ActiveAE directly. This resulted in many issues if the output format of AudioDSP didn't suit the format required by ActiveAE. Consequently RPi, OS X and Android were broken for a long time. This was the reason why I implemented a new buffer for AudioDSP, which is specific to signal processing and can later talk to ActiveAE. This buffer also uses an interface, and it is still possible to switch between the new AudioDSP buffer and the already existing buffer from ActiveAE. The old one will survive because it is used for passthrough audio, but I guess once AudioDSP V2 is around there will be no need to use it, except if you want to listen to full DTS:X or Atmos audio tracks. It's a pity that there is so far no open source decoder for these formats.
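The switching idea can be sketched like this: both buffer pools implement the same interface, and a small factory picks one at stream-open time, falling back to the old path for passthrough streams. Everything below (`IAudioBufferPool`, `CreateBufferPool`, …) is illustrative naming, not Kodi's actual classes:

```cpp
#include <cassert>
#include <memory>

// Common interface for both buffer implementations.
struct IAudioBufferPool {
  virtual ~IAudioBufferPool() = default;
  virtual bool DoesProcessing() const = 0;
};

// The new AudioDSP buffer: runs signal processing.
struct DSPBufferPool : IAudioBufferPool {
  bool DoesProcessing() const override { return true; }
};

// The existing ActiveAE buffer: kept for passthrough audio.
struct ActiveAEBufferPool : IAudioBufferPool {
  bool DoesProcessing() const override { return false; }
};

// Factory: passthrough streams (e.g. DTS:X or Atmos bitstreams)
// keep the old buffer; everything else goes through the DSP buffer.
std::unique_ptr<IAudioBufferPool> CreateBufferPool(bool passthrough) {
  if (passthrough)
    return std::make_unique<ActiveAEBufferPool>();
  return std::make_unique<DSPBufferPool>();
}
```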

Also, the timestamp calculation wasn't done inside AudioDSP, which resulted in synchronization errors. That's why I moved this functionality into AudioDSP, which offers a lot more flexibility for a developer.
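As a hedged illustration of why owning the timestamps helps: a stage that consumes or produces samples can then advance the presentation timestamp itself, simply by the number of samples over the sample rate. The helper below is purely illustrative, not Kodi's API:

```cpp
#include <cassert>
#include <cstdint>

// Advance a presentation timestamp (in seconds) by the audio duration
// corresponding to `samplesConsumed` samples at `sampleRate` Hz.
double NextPts(double inputPtsSeconds, std::uint64_t samplesConsumed,
               unsigned int sampleRate) {
  return inputPtsSeconds +
         static_cast<double>(samplesConsumed) / sampleRate;
}
```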

Another issue was that the entire AudioDSP system was derived from the PVR extension point. As you might guess, this doesn't fit the AudioDSP requirements very well. So I rewrote most of the codebase and decided to use design patterns like Model View Controller and Active Object for add-on management, AudioDSP settings, and AudioDSP buffer creation.
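For readers unfamiliar with the Active Object pattern: callers enqueue requests, and a dedicated thread executes them one at a time, serializing access to internal state without caller-side locking. A minimal sketch of the pattern itself (illustrative only, not the actual AudioDSP code):

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal Active Object: tasks posted from any thread run one by one
// on a single worker thread, in FIFO order.
class ActiveObject {
public:
  ActiveObject() : m_worker([this] { Run(); }) {}
  ~ActiveObject() {
    // Shutdown is itself a task, so all earlier tasks finish first.
    Post([this] { m_done = true; });
    m_worker.join();
  }
  void Post(std::function<void()> task) {
    {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_queue.push(std::move(task));
    }
    m_cv.notify_one();
  }

private:
  void Run() {
    while (!m_done) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        task = std::move(m_queue.front());
        m_queue.pop();
      }
      task();  // executed on the worker thread only
    }
  }

  std::mutex m_mutex;
  std::condition_variable m_cv;
  std::queue<std::function<void()>> m_queue;
  bool m_done = false;
  std::thread m_worker;  // declared last so it starts after the rest
};
```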

As you might already know, my preferred operating system for gaming, developing and using Kodi is Windows. Consequently AudioDSP V2 started to work on this platform. On some quiet days I ported my work with minimal effort (setting up the build environment, fixing some small compiler errors and tweaks) to my Nvidia Shield TV (Android), my Ubuntu machine and my new Mac mini. So I'm more or less able to test AudioDSP V2 on Android, Ubuntu, OS X and Windows. The only common platform currently missing is the Raspberry Pi, but only because I haven't found the time to set up my build environment for it. The new AudioDSP V2 code base is much better suited to cross-platform audio processing than the old one.

The new codebase is much more flexible and easier to maintain than the old one. That's why it took only one week (~1-2 h a day) to write a wrapper (two new objects and small AudioDSP V2 API changes) to support the old AudioDSP V1 API. During the last few days I tested it with the old adsp.freesurround add-on code base.

You might have missed it on Twitter, but last week I was able to do the first real multichannel audio listening tests on Windows and Ubuntu. This wasn't possible before, because I had a really annoying bug inside AudioDSP V2. Consequently, video and audio synchronization was broken for several weeks. The issue was that AudioDSP V2 dropped audio buffer pointers, so the buffers were leaked.
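To illustrate the bug class (not the actual Kodi code): if a queue hands out raw pointers and one of them is dropped, the buffer leaks and its timing information is lost with it. Holding buffers in `std::shared_ptr` makes a dropped reference clean up after itself. All names below, including the test-only counter `g_liveBuffers`, are hypothetical:

```cpp
#include <cassert>
#include <memory>
#include <queue>

// Test-only counter so we can observe buffer lifetimes.
static int g_liveBuffers = 0;

struct AudioBuffer {
  AudioBuffer() { ++g_liveBuffers; }
  ~AudioBuffer() { --g_liveBuffers; }
};

// Queue of shared-ownership buffers: a popped buffer that the caller
// "drops" (never stores) is freed automatically, instead of leaking
// as a raw pointer would.
class BufferQueue {
public:
  void Push(std::shared_ptr<AudioBuffer> buf) {
    m_queue.push(std::move(buf));
  }
  std::shared_ptr<AudioBuffer> Pop() {
    auto buf = std::move(m_queue.front());
    m_queue.pop();
    return buf;
  }

private:
  std::queue<std::shared_ptr<AudioBuffer>> m_queue;
};
```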

I really hate to say it, but AudioDSP V2 does not yet suit the needs of daily use, because the UI is missing and some edge cases will crash the entire Kodi process. I am working really hard on a version that is usable daily, but I currently can't say when it will be available. So please don't ask me for a release date; it will be released "When it's done!" (3D Realms).
Whenever I find some spare time to work on the code base, it gets closer and closer to a stable core system.

Disabling AudioDSP in the Kodi V17 final release gave me a lot more time to focus on AudioDSP V2. To be honest, I didn't receive any bug reports or complaints from users blaming AudioDSP V1 for things not working. So the new model and process worked very well for me, and I will continue with it as long as I think AudioDSP V2 isn't ready for a public test build. Furthermore, I guess you won't want to use AudioDSP V2 while I'm not using it in my own living room; if something goes wrong it could blow out your ears. But once I start using AudioDSP V2 in my own living room, the bug hunting sessions will begin, and a public test build will be available after a few weeks/days of intensive testing.

These are the next steps I want to take with my code base:
  • Refactor the old AudioDSP add-on API so that it suits the needs of a better and easier integration into AudioDSP V2
  • Rework the AudioDSP V2 UI
  • Port the existing add-on codebases to the new add-on API
  • Develop and publish the adsp.xconvolver, adsp.dynamics and adsp.volume add-ons :D
The latest news about AudioDSP and my libraries is available on Twitter.

Developers can follow me on Github.


RE: Current state and vision about AudioDSP - by AchimTuran - 2017-07-06, 21:35