2010-09-20, 05:37
While I agree with what you're doing, something odd seems to be happening... not that it's XBMC's fault; it could be an ALSA issue for all I know.
I modified line 219 of AESinkALSA.cpp: I changed the enum starting point to AE_FMT_S24BE and recompiled. The resulting sound was much cleaner, though I can still hear some slight clipping when I listen to A Perfect Circle. You'll be interested to know that it initialized the audio at S16NE (NE = native endian, I assume?).
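For context, here's a minimal sketch of the kind of fallback loop being changed (this is *not* the actual AESinkALSA.cpp code; the enum values, `DeviceAccepts`, and `NegotiateFormat` are invented for illustration). Moving the starting point later in the enum simply skips every format before it:

```cpp
#include <cassert>

// Hypothetical stand-ins -- the real sink walks XBMC's AEDataFormat enum and
// tests each format against ALSA; these names are made up for the sketch.
enum SampleFormat { FMT_S32NE, FMT_S24BE, FMT_S16NE, FMT_COUNT };

// Simulated device: like the HDMI codec above, it advertises 8/16/20/24-bit
// PCM but not 32-bit.
static bool DeviceAccepts(SampleFormat fmt) {
    return fmt != FMT_S32NE;
}

// Walk the format list from `start` onward and return the first format the
// device accepts. Changing `start` from the 32-bit entry to FMT_S24BE is the
// kind of edit described above.
SampleFormat NegotiateFormat(SampleFormat start) {
    for (int f = start; f < FMT_COUNT; ++f)
        if (DeviceAccepts(static_cast<SampleFormat>(f)))
            return static_cast<SampleFormat>(f);
    return FMT_S16NE;  // last resort
}
```

In this simulation, starting at FMT_S32NE still lands on FMT_S24BE, which is what you'd *expect* to happen on real hardware too; the puzzle is why the real negotiation doesn't behave that way.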
I don't think 32-bit processing should be selected for my sound card. Check out this funness:
Code:
# ~/xbmc-ae$ cat /proc/asound/card0/codec#3 | grep bits
bits [0x0]:
bits [0xf]: 8 16 20 24
bits [0xf]: 8 16 20 24
bits [0xf]: 8 16 20 24
bits [0xf]: 8 16 20 24
bits [0xf]: 8 16 20 24
# ~/xbmc-ae$ cat /proc/asound/card0/codec#0 | grep bits
bits [0xe]: 16 20 24
bits [0xe]: 16 20 24
bits [0xe]: 16 20 24
bits [0xe]: 16 20 24
bits [0xe]: 16 20 24
bits [0x1e]: 16 20 24 32
bits [0x6]: 16 20
bits [0x6]: 16 20
bits [0x1e]: 16 20 24 32
bits [0xe]: 16 20 24
codec#3 is the HDMI device; codec#0 is the analog and S/PDIF outputs (as far as I know).
So, if we assume the features of codec#0 are irrelevant here: why would 32-bit be selected over 24-bit when none of the HDMI nodes accept 32-bit? And similarly, why *isn't* 24-bit being picked up when I skip past the 32-bit formats?
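For what it's worth, the hex value in each "bits" line is a bitmask defined by the HD Audio spec: bit 0 = 8-bit, bit 1 = 16, bit 2 = 20, bit 3 = 24, bit 4 = 32 (the transcript above is consistent with this: 0xf is 8/16/20/24, 0x1e is 16/20/24/32, 0x6 is 16/20). A quick sketch to decode it (`DecodeBits` is just an illustration, not part of any codebase):

```cpp
#include <cstdint>
#include <vector>

// Decode an HDA "bits" mask from /proc/asound/cardN/codec#M into the list of
// supported PCM bit depths. Bits 0..4 correspond to 8/16/20/24/32-bit PCM.
std::vector<int> DecodeBits(uint32_t mask) {
    static const int depths[5] = {8, 16, 20, 24, 32};
    std::vector<int> out;
    for (int i = 0; i < 5; ++i)
        if (mask & (1u << i))
            out.push_back(depths[i]);
    return out;
}
```

So only the 0x1e nodes on codec#0 ever report 32-bit support, which is why 32-bit being chosen for the HDMI device looks wrong.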
Codec: Realtek ALC883
Address: 0
Codec: Nvidia MCP78 HDMI
Address: 3
Or do you think I'm going about this all wrong? :)
Plus: my source was stereo (mp3/ogg), so the boost-level-on-downmix option didn't seem to have much effect. I will try it again once I pull the latest SVN changes... another time, though, since it's late and I have to work in the a.m. :(