Kodi Community Forum
Xbmc not working for blind users. - Printable Version

+- Kodi Community Forum (https://forum.kodi.tv)
+-- Forum: Discussions (https://forum.kodi.tv/forumdisplay.php?fid=222)
+--- Forum: Kodi related discussions (https://forum.kodi.tv/forumdisplay.php?fid=6)
+--- Thread: Xbmc not working for blind users. (/showthread.php?tid=117199)



RE: Xbmc not working for blind users. - ruuk - 2014-04-29

(2014-04-29, 07:33)eckythump Wrote: I hadn't noticed you linking to httpttsd, but that's cool, too. Once you're happy with your python server, I won't be at all offended if you want to just link to that. It's also been my observation that python stuff seems much easier to package up as standalone packages that people can install/run as proper windows services, and that'd definitely be a great goal (in time) for your python server.
I felt a little bad about basically replacing your perl script, but it seemed like a waste not leveraging all this backend code.
I do plan to eventually create a windows installer and make this work as a service. I've actually done this before, but that was about 8 or 9 years ago so I remember almost nothing :)
(2014-04-29, 07:33)eckythump Wrote: One other thought I literally just had was that it might be good to add an option into the config section that lets you toggle between screen-reading mode (what we currently have) and a non-screen-reading mode. The latter essentially being silent all the time, but leaving your addon available for other future addons to leverage it if they want their addons to generate speech, for example if someone wanted to write an addon to speak subtitles.
I've been thinking for a while that it would be nice to have this provide a module for other addons, so they could just, for instance, import xbmcspeech and then call xbmcspeech.say("some stuff"). This will probably work best if I split out the backends into the xbmc module and have the service separate, importing the backends.
I do like having the service work all in one addon with no dependencies, because that makes it easier to install directly from the zip. I could actually have it provide a module without separating out the backends, but then if you disable the addon, the module would no longer be available. And I want the service to be disabled when "off", both because it then no longer uses any resources and because that allows it to be included in an XBMC install where it can disable itself on first run. This is the only way I have come up with so far that would provide pre-installed speech without affecting sighted users.

(2014-04-29, 19:26)popcornmix Wrote:
(2014-04-29, 03:57)ruuk Wrote: It seems to be certain length that plays, rather than some fraction of the total length. Something like 1 to 1.5 seconds.
While I'm sure it's not directly related, it seems to happen to wavs of approximately the same length as the crashing issue.

Yes, I've had another look, and have found the cause of the truncation. There was a 256K limit in size (after channel mapping/resampling).
I have a fix for that on newclock3 and gotham_rbp_backports branches. It should appear in a future build.

Thanks for checking on that and making a fix. I really appreciate the time you've taken to follow this.


v0.0.47 - ruuk - 2014-05-01

Added a new version to my repository: 0.0.47.

Get it or the repository from the Downloads Page.

Changes:
  • Changes to the HTTP Speech Server backend to work with the new python server

The python speech server is on the downloads page. Extract it somewhere and run server.py from the speech.server folder. You can get command line options with --help, but you generally won't need them. This should work on any platform when set to have the server speak, and will work on windows and unix platforms for wav serving.
It uses the same backends as the addon, and you can select them from the addon.

Just FYI, I noticed today that installed Cepstral voices are available in SAPI.


RE: Xbmc not working for blind users. - eckythump - 2014-05-01

(2014-05-01, 02:25)ruuk Wrote: The python speech server is on the downloads page. Extract it somewhere and run server.py from the speech.server folder. You can get command line options with --help, but you generally won't need them. This should work on any platform when set to have the server speak, and will work on windows and unix platforms for wav serving.
It uses the same backends as the addon, and you can select them from the addon.

Just FYI, I noticed today that installed Cepstral voices are available in SAPI.
I took a look at the python server. Looks great. It's certainly a lot better constructed than my perl thing.

I saw that you were able to use the wave module to generate the RIFF/WAVE header. That's good. My header creation code seemed to work just fine, but I feared there might be edge cases that broke it. I trust the wave module to do it 100% correctly.
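[Editor's sketch] For reference, the approach being discussed, letting the stdlib wave module build the RIFF/WAVE header around raw PCM bytes, can be sketched like this. The parameter defaults are illustrative, not necessarily what the server uses:

```python
# Wrap raw PCM bytes in a RIFF/WAVE header using the stdlib wave
# module, writing into an in-memory buffer instead of a file.
import io
import wave

def pcm_to_wav(pcm_data, channels=1, sample_width=2, rate=22050):
    buf = io.BytesIO()
    w = wave.open(buf, 'wb')
    w.setnchannels(channels)
    w.setsampwidth(sample_width)  # bytes per sample: 2 -> 16-bit
    w.setframerate(rate)
    w.writeframes(pcm_data)       # also fixes up the header sizes
    w.close()
    return buf.getvalue()

wav = pcm_to_wav(b'\x00\x00' * 22050)  # one second of 16-bit silence
print(wav[:4], wav[8:12])  # the RIFF and WAVE magic bytes
```

The advantage over hand-built headers is that writeframes/close recompute the chunk sizes, so the edge cases mentioned above are handled by the library.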

I have one suggestion for you. If you look at the code for my most recent httpttsd.pl, you'll see that I've made the generation of the AudioFormat value more dynamic, pulling from variables set right at the top of the file. You might want to do something similar so users can easily edit and set this value without needing to read through the code to find it.

I've had some experiences where some voices sound great at some frequencies, and awful at others. The infovox voices deserve special mention here, as they sound great at 22050, and bizarrely, sound awful at 44100, so it's preferable to be able to set that value easily should a user find that their voice of choice sounds like arse at our chosen defaults.

If you were feeling extra tricky, you could probably have "bits", "channels" and "samplerate" as optional POST variables so this can be set on the client side, falling back to a server default when not. I'm not sure I trust my makeshift WAVE header generation code to work with all the different permutations, but it's certainly an option you could consider.
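[Editor's sketch] A hedged sketch of that suggestion: read "bits", "channels" and "samplerate" from the POST variables when present, and fall back to server defaults otherwise. The field names are the ones proposed above; the defaults and helper function are made up for illustration:

```python
# Server-side defaults; a real server might load these from a config.
DEFAULTS = {'bits': 16, 'channels': 1, 'samplerate': 22050}

def audio_params(post_vars):
    """Merge client-supplied audio settings over the server defaults,
    ignoring missing or malformed values."""
    params = {}
    for key, default in DEFAULTS.items():
        try:
            params[key] = int(post_vars.get(key, default))
        except (TypeError, ValueError):
            params[key] = default  # bad client value: keep default
    return params

print(audio_params({'samplerate': '44100'}))
# -> {'bits': 16, 'channels': 1, 'samplerate': 44100}
```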

Thankfully almost every voice available on Windows hooks into the SAPI system. Cepstral, AT&T Natural, Loquendo, RealSpeak, NeoSpeech, Infovox, etc. Even the windows port of eSpeak has SAPI hooks. The only voices I've seen that don't are Eloquence (comes with JAWS) and flite/festival compiled under cygwin (For all I know there're native Windows ports that do have SAPI hooks for these).

I played with Cepstral Callie on the pi. I noticed that swift has a -o option for writing out a wav. I tried it, but it was very, very slow. Too slow on the pi to be practical, but it might be an acceptable workaround for the ALSA requirement on other linuxes if it can generate the wav files fast enough.

I'll try and give your python server a whirl soon and see how it goes. I've got a standard install of Python 2.7 on my windows machine. Is this what you're using, or should I get some other more magical python package?


RE: Xbmc not working for blind users. - ruuk - 2014-05-01

(2014-05-01, 05:22)eckythump Wrote: I took a look at the python server. Looks great. It's certainly a lot better constructed than my perl thing.

I saw that you were able to use the wave module to generate the RIFF/WAVE header. That's good. My header creation code seemed to work just fine, but I feared there might be edge cases that broke it. I trust the wave module to do it 100% correctly.
In the commented-out code, I actually used a method based on your code. I originally tried the wave module and it wasn't working. Then I was able to have SAPI write the whole thing to file, headers and all, which did work. I wanted to try skipping the file though, so I tried your method, which I wasn't getting to work. It turns out that with comtypes the GetData() call was returning the data as a tuple, so I used the array module to convert it to a string, and it then worked both with the wave module and when writing the headers directly. I'm sure there must be a way to access the data directly rather than as a tuple of bytes. I'll have to delve a little deeper into comtypes. Too bad the documentation isn't better.
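[Editor's sketch] The tuple-to-string conversion described above can be sketched like this; the tuple here is fake sample data standing in for what GetData() returned through comtypes:

```python
import array

# Fake stand-in for the tuple of unsigned byte values GetData() returned.
data_tuple = (0, 0, 255, 127, 1, 128)

# array('B', ...) packs the byte values into a contiguous buffer;
# .tobytes() (named .tostring() in the Python 2 of this era) yields a
# byte string that the wave module, or a hand-written header writer,
# will accept as PCM data.
pcm = array.array('B', data_tuple).tobytes()
print(len(pcm))  # 6
```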
(2014-05-01, 05:22)eckythump Wrote: I have one suggestion for you. If you look at the code for my most recent httpttsd.pl, you'll see that I've made the generation of the AudioFormat value more dynamic, pulling from variables set right at the top of the file. You might want to do something similar so users can easily edit and set this value without needing to read through the code to find it.

I've had some experiences where some voices sound great at some frequencies, and awful at others. The infovox voices deserve special mention here, as they sound great at 22050, and bizarrely, sound awful at 44100, so it's preferable to be able to set that value easily should a user find that their voice of choice sounds like arse at our chosen defaults.

If you were feeling extra tricky, you could probably have "bits", "channels" and "samplerate" as optional POST variables so this can be set on the client side, falling back to a server default when not. I'm not sure I trust my makeshift WAVE header generation code to work with all the different permutations, but it's certainly an option you could consider.
Sounds like a good idea. I'll add it in.
(2014-05-01, 05:22)eckythump Wrote: Thankfully almost every voice available on Windows hooks into the SAPI system. Cepstral, AT&T Natural, Loquendo, RealSpeak, NeoSpeech, Infovox, etc. Even the windows port of eSpeak has SAPI hooks. The only voices I've seen that don't are Eloquence (comes with JAWS) and flite/festival compiled under cygwin (For all I know there're native Windows ports that do have SAPI hooks for these).

I played with Cepstral Callie on the pi. I noticed that swift has a -o option for writing out a wav. I tried it, but it was very, very slow. Too slow on the pi to be practical, but it might be an acceptable workaround for the ALSA requirement on other linuxes if it can generate the wav files fast enough.
I'm going to add in wav file output and with it the ability to serve wavs from the server, but someone else will have to test it. The unlicensed linux version doesn't allow output to a wav file. In fact it just writes the header. At first I wondered why, but then my devious mind realized you could just subtract the licensing spiel from the wav and play the rest. Strange that it's enabled on the Pi and not linux. Perhaps because it is useless on the Pi.
(2014-05-01, 05:22)eckythump Wrote: I'll try and give your python server a whirl soon and see how it goes. I've got a standard install of Python 2.7 on my windows machine. Is this what you're using, or should I get some other more magical python package?
That should work fine. Should work on 2.6+ I think.


RE: Xbmc not working for blind users. - eckythump - 2014-05-02

ruuk

Tried to play with the python server, but got import errors for numpy. I will revisit again later when I can find out how to install modules under Windows (I'm pretty much exclusively a python on BSD/Linux person).

My attempts to play with it uncovered a couple of bugs, though.

After I switched back from the python server to the perl server in the configuration setting, the string "SAPI." was being prepended to the voice variable being posted to the remote server. I believe this line of code is the culprit:
Code:
if voice: voice = '{0}.{1}'.format(self.engine,voice)
and needs to also test if we're using the perl or python server, and only prepend for python.
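[Editor's sketch] Something along these lines, perhaps; the attribute names (self.engine, a server_is_python flag) are assumptions made for the sketch, not the addon's real code:

```python
# Only prepend the engine name when talking to the Python server;
# the Perl server expects the bare voice name.
class Backend(object):
    def __init__(self, engine, server_is_python):
        self.engine = engine
        self.server_is_python = server_is_python

    def voice_param(self, voice):
        if voice and self.server_is_python:
            voice = '{0}.{1}'.format(self.engine, voice)
        return voice

print(Backend('SAPI', True).voice_param('Anna'))   # SAPI.Anna
print(Backend('SAPI', False).voice_param('Anna'))  # Anna
```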

So after that, I nuked the userdata/addons/service.xbmc.tts/settings.xml file to remove any stale/cross-polluting settings and restarted.

After that, I discovered that the voice option has completely disappeared when the perl server is set to yes. Presumably it's hidden when self.engine isn't defined yet. Again, probably just need to check what server is being used and then only hide if self.engine undefined and using python server.

I'll see if I can do some more testing later today.


RE: Xbmc not working for blind users. - eckythump - 2014-05-02

ruuk

Haven't had a chance to play any more with the python server, but I have made an unrelated observation that I'd like you to try and reproduce.

I have a feeling that the usecache option for playsfx isn't working as desired.

When PLAYSFX_HAS_USECACHED is True, memory usage slowly climbs as more and more speech operations are performed, as if the caching isn't happening anymore, but the memory still isn't being released afterwards.

If I hardcode PLAYSFX_HAS_USECACHED = False, I can navigate through every item on a screen and the memory use will rise with each item, but once all items have been accessed (and thus cached) I can continue to navigate around that same screen and the memory usage stays static and doesn't rise at all.

Doing the same with PLAYSFX_HAS_USECACHED set to True, the memory usage will keep rising, even when revisiting previously said items.

Are you seeing this same behaviour?


This is on OpenELEC 3.95.7, FYI.


v0.0.48 - ruuk - 2014-05-02

Added a new version to my repository: 0.0.48.

Get it or the repository from the Downloads Page.

Changes:
  • Fixes to HTTP server backend to make it work again with the Perl server
  • Added version check to HTTP server and switch between Perl/Python servers where appropriate

To use the Python server with this version you will need to download the new version from the link on the downloads page.

(2014-05-02, 06:29)eckythump Wrote: ruuk

Tried to play with the python server, but got import errors for numpy. I will revisit again later when I can find out how to install modules under Windows (I'm pretty much exclusively a python on BSD/Linux person).
Sorry I didn't mention that. I forgot I had to install numpy to get it to work.
(2014-05-02, 06:29)eckythump Wrote: My attempts to play with it uncovered a couple of bugs, though.

After I switched back from the python server to the perl server in the configuration setting, the string "SAPI." was being prepended to the voice variable being posted to the remote server. I believe this line of code is the culprit:
Code:
if voice: voice = '{0}.{1}'.format(self.engine,voice)
and needs to also test if we're using the perl or python server, and only prepend for python.

So after that, I nuked the userdata/addons/service.xbmc.tts/settings.xml file to remove any stale/cross-polluting settings and restarted.

After that, I discovered that the voice option has completely disappeared when the perl server is set to yes. Presumably it's hidden when self.engine isn't defined yet. Again, probably just need to check what server is being used and then only hide if self.engine undefined and using python server.

I'll see if I can do some more testing later today.
All of this should be fixed in the new version, plus I added a version path to the python server and now the backend will check that and switch between Perl/Python mode depending on whether it reports the version. You will need to download the server again though.


RE: Xbmc not working for blind users. - ruuk - 2014-05-02

(2014-05-02, 13:07)eckythump Wrote: ruuk

Haven't had a chance to play any more with the python server, but I have made an unrelated observation that I'd like you to try and reproduce.

I have a feeling that the usecache option for playsfx isn't working as desired.

When PLAYSFX_HAS_USECACHED is True, memory usage slowly climbs as more and more speech operations are performed, as if the caching isn't happening anymore, but the memory still isn't being released afterwards.

If I hardcode PLAYSFX_HAS_USECACHED = False, I can navigate through every item on a screen and the memory use will rise with each item, but once all items have been accessed (and thus cached) I can continue to navigate around that same screen and the memory usage stays static and doesn't rise at all.

Doing the same with PLAYSFX_HAS_USECACHED set to True, the memory usage will keep rising, even when revisiting previously said items.

Are you seeing this same behaviour?


This is on OpenELEC 3.95.7, FYI.

I haven't tried to reproduce this yet. I just spent all my free time fixing the other stuff :)

Here are some thoughts I had.
When you have useCached=True and the file name is the same, it ignores the file and just plays the stored wav. Obviously, in this situation, nothing else will be loaded into memory.
When you have useCached=False and the file name is the same, it calls CAEFactory::FreeSound on the sound and deletes it from the mapped references.
I can see a few possibilities here. One is that FreeSound does not free the sound right away but somehow flags it for deletion or something. I have no idea what FreeSound actually does, so this is just guessing. If this is the case, then the memory may become available later.
Another possibility is that when useCached=False, the wav is read each time. Perhaps this is filling the disk cache. Does the memory usage you mention include the disk cache? If so, then this would be fine, because it is still available for use. My understanding of Linux disk caching and memory usage reporting is not great, so let me know if I've said something stupid :)
I'm also not sure how writing or overwriting files on a tmpfs affects memory usage.
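[Editor's sketch] One way to answer the disk-cache question on Linux is to read /proc/meminfo, which reports cached pages separately from free and process memory. This is generic Linux, not anything from the addon:

```python
def meminfo():
    """Parse /proc/meminfo into a dict of values in kB (Linux only)."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, _, rest = line.partition(':')
            info[key.strip()] = int(rest.split()[0])
    return info

m = meminfo()
# If 'Cached' rises while the process's own usage stays flat, the
# growth is reclaimable disk cache rather than a leak.
print('MemFree: %d kB, Cached: %d kB' % (m['MemFree'], m['Cached']))
```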

In any case, I'm still going to try to reproduce the issue when I get a chance and see if I can figure anything out.
Please feel free to shoot down my assumptions and fill in the gaping holes in my Linux knowledge :)


RE: Xbmc not working for blind users. - eckythump - 2014-05-03

(2014-05-02, 22:47)ruuk Wrote: I can see a few possibilities here. One is that FreeSound does not free the sound right away but somehow flags it for deletion or something. I have no idea what FreeSound actually does, so this is just guessing. If this is the case, then the memory may become available later.
Another possibility is that when useCached=False, the wav is read each time. Perhaps this is filling the disk cache. Does the memory usage you mention include the disk cache? If so, then this would be fine, because it is still available for use. My understanding of Linux disk caching and memory usage reporting is not great, so let me know if I've said something stupid :)
I'm also not sure how writing or overwriting files on a tmpfs affects memory usage.

In any case, I'm still going to try to reproduce the issue when I get a chance and see if I can figure anything out.
Please feel free to shoot down my assumptions and fill in the gaping holes in my Linux knowledge :)
My knowledge of Linux's memory systems is also patchy at best, but as far as I understand it, any kind of system-wide cache, such as a filesystem-level cache, will consume RAM outside the xbmc process, not within it. Same with tmpfs.

I have had a look just now after leaving my pi idle overnight and the memory usage for the xbmc.bin process is still at what it was at when I stopped fiddling last night.

After a bit more usage, it'll start to get sluggish and unresponsive until it's practically unusable and a reboot is necessary.

Look forward to hearing back what you discover when you test. I'd also be curious to hear whether you get the same behaviour on non-Pi/non-OpenELEC installs, too. I currently only have the pi to test on, so I'm never sure if an issue I spot is universal, OpenELEC-specific, or Pi-specific.

Thanks for the perl/python server interoperability fixes. All your time and effort on this is much appreciated.


RE: Xbmc not working for blind users. - eckythump - 2014-05-03

ruuk

Finally got around to playing with your speech server. Worked fine once I installed the numpy package.

There's little I can really say other than "it works!". From a user's perspective, navigating xbmc with it was indistinguishable from my perl one. Fast and responsive. If I can figure out how to run yours as a service, I'll ditch the perl one and run yours exclusively.

Oh, actually, there are two very minor things I noticed. It appears the speech server offers ttsd as one of the supported engines. It's an engine that can stream wavs, so I can see how it might end up in that list. :)

The other minor issue was when I was first configuring the addon to use the python server. After I set perl server to no and selected the engine, it refreshed and perl server was set to yes again, so I had to unset that and then choose the voice, and I think I had to unset it again before I hit OK, too.

I took a few more minutes and just tested it on FreeBSD. It seemed to work fine. It detected Flite and eSpeak. I only tested flite. It also offered "Google" as an option. I selected it to see what would happen, and xbmc immediately crashed as soon as I hit OK, which wasn't a surprise. I imagine it offered this option because I have mplayer installed on the host server. If you wanted to offer Google via the speech server, you could probably use mplayer to convert to wav rather than play the mp3, and then deal with it like everything else, though it'd probably be better to use mpg123 or sox if available, as they're lighter-weight, if you wanted to go down that path.

But it looks good and those are fairly minor bugs and far from showstoppers. I look forward to seeing how future versions go.


RE: Xbmc not working for blind users. - ruuk - 2014-05-03

(2014-05-03, 15:53)eckythump Wrote: ruuk

Finally got around to playing with your speech server. Worked fine once I installed the numpy package.

There's little I can really say other than "it works!". From a user's perspective, navigating xbmc with it was indistinguishable from my perl one. Fast and responsive. If I can figure out how to run yours as a service, I'll ditch the perl one and run yours exclusively.
I'm messing around with running it as a service now. I've got it working, I just need to figure out how to respond to a request to stop the service and add some code to read settings from a file instead of the command line.
(2014-05-03, 15:53)eckythump Wrote: Oh, actually, there are two very minor things I noticed. It appears the speech server offers ttsd as one of the supported engines. It's an engine that can stream wavs, so I can see how it might end up in that list. :)
Yeah, I knew that was there; I'm just still deciding what method I want to use to handle engines I don't want displayed.
(2014-05-03, 15:53)eckythump Wrote: The other minor issue was when I was first configuring the addon to use the python server. After I set perl server to no and selected the engine, it refreshed and perl server was set to yes again, so I had to unset that and then choose the voice, and I think I had to unset it again before I hit OK, too.
I had the same issue. I thought I had fixed it when I changed something, but I guess I didn't. I'll figure out what the issue is.
(2014-05-03, 15:53)eckythump Wrote: I took a few more minutes and just tested it on FreeBSD. It seemed to work fine. It detected Flite and eSpeak. I only tested flite. It also offered "Google" as an option. I selected it to see what would happen, and xbmc immediately crashed as soon as I hit OK, which wasn't a surprise.
Yeah, if you don't pass a valid wav to playSFX(), XBMC does not like it. I have experienced this many times :)
It shouldn't actually have crashed XBMC though, because it should have just played through mplayer instead of XBMC. I'll have to look into that.
(2014-05-03, 15:53)eckythump Wrote: I imagine it offered this option because I have mplayer installed on the host server. If you wanted to offer Google via the speech server, you could probably use mplayer to convert to wav rather than play the mp3, and then deal with it like everything else, though it'd probably be better to use mpg123 or sox if available, as they're lighter-weight, if you wanted to go down that path.
Yeah, it checks for mplayer before it shows as available. I'll have to add mpg123 as a supported player and have it check for either of those. At the moment it should also only show up as an option if you have "Play On Server" selected.
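[Editor's sketch] That availability check could be sketched like this. shutil.which is Python 3 (the era's Python 2 code would have scanned PATH by hand), and the candidate order expressing an mpg123-first preference is an assumption:

```python
import shutil

def find_mp3_player(candidates=('mpg123', 'mplayer')):
    """Return (name, path) for the first candidate on PATH, else None,
    preferring the lighter-weight player."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            return name, path
    return None

player = find_mp3_player()
print(player if player else 'Google backend unavailable')
```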


RE: Xbmc not working for blind users. - jhall - 2014-05-03

Hi,

I must say, I am quite impressed! When I last looked at accessibility with XBMC, probably at the beginning of this thread, there was no forward movement. Then, within 2 months, this became a usable project.

Some observations:

I'm using it on the Pi, so I put the addon on my existing 3.95.6 build. It was crashing on large wav files, so it was difficult to get the gist of things. I then grabbed the current build from snapshots (about a couple of hours ago, I guess), and not only is the crashing gone, but the chomping on the end of files appears to be gone too.

I put both the 0.0.48 and speech.server.zip in the addons dir, but I'm guessing I don't need the speech.server. It doesn't look like an addon anyway, now that I look through it a bit better.

I must say, I'm not a fan of the keyboard input method with the ridiculous remote I got (whoever heard of a remote that doesn't have numbers on it, anyway!). In the end I just went in and edited the .xml settings file and added the proper values, since the IP address input method was really annoying. There did not seem to be a way to make the Pi read the input field so I could tell exactly what was in there. I can't imagine the fun I would have if I had to type in a WPA key or something.

The initial dialogue boxes (welcome to OpenELEC, etc.) did not properly speak, and since this remote has no F2 or F3 buttons on it, I didn't try to make it read; I simply clicked Next until the main GUI showed up. (Previously I had used Google Goggles to get an idea of what might be going on.)

Then I enabled live TV and enabled a PVR backend. There are 1,434 channels in my backend with 2 weeks of EPG for each; undoubtedly that caused the pi to be unhappy over memory. I scrolled down the list until I found a channel and clicked on it. The channel began playing, but suddenly speech was aborted and would not play, I'm guessing because media was playing? Is there a way to pause the media or something so that it will speak long enough to figure out how to get out of it?

My wife had walked in while I was battling with the IP address input madness and offered to help. At some point the thing stopped speaking, which was just fine for us, since it was being a pest. We kept clicking on OK until we got to the 'enable' stage of turning on the addon. Since clicking enable did not work, I walked to a computer, looked in the settings.xml file, and rebooted the Pi.

Maybe one solution to the input not being read: if you pause without moving the selector for more than 3 seconds after the last sniffy sound, it reads the input value. At least then you could figure out what was going on. And then, of course, if you hit backspace, it could read immediately. I don't know whether the input field can be polled or if you just have to keep track of it on events or something, but it makes that sniffy noise all the time, so I'm guessing you could hook onto that callback to determine what was clicked.

The last thing that happened: when I was trying to click the 'back' button (or the one that I think is the back button; it seems to go back), the Pi kept playing the content pulling from the PVR, stuttered for a few seconds, went red, and then we gave up for a while, since my wife wanted to watch 'normal' TV.

So, I'm quite pleased with the project so far. I'm guessing if you use a wireless keyboard or something it will be easier to input things.


RE: Xbmc not working for blind users. - Traker1001 - 2014-05-04

I haven't had a chance to try it on the Pi or OpenELEC as of yet. I've been pretty busy getting everything smoothed out and working on Windows, and it is working wonderfully.

When you get a second, maybe a couple of thoughts. One thing I have noticed is that when you are in the info screen for a library movie, the F3 button will read the description, but nothing else. Other function buttons don't read anything. Is there any way to get the function button to read everything on the screen, i.e. actor, genre, etc.?

Also, the function buttons depend on where you are located, as opposed to having F1 through F4 buttons for various locations and information. I was wondering if at some point it wouldn't be possible to detect what screen you are on and just have a single F2 or something read the extra info. I do realize that's asking a lot and may not even be possible; however, I thought I'd put it out there.

Otherwise on the windows side things are working great. No crashing, reading wonderfully for what it has been reading, which is everything important. I can't wait to start playing fully with Pi and Openelec.

Jhall,
I still haven't gotten to the PVR side of things. I still need to get my TV mounted and WMC setup. So I don't have anything to compare for you.
Also, I have found the input of keyboard characters, such as for search, a headache if you don't use a keyboard, even for sighted folks.

On a side note,
I did start to work on a YouTube video demoing the capabilities, and was considering a wiki page and drawing some more folks in. But I'm a little unsure if it's time yet, especially with XBMC getting ready to release Gotham real soon and some of the fixes being based on Gotham itself.


RE: Xbmc not working for blind users. - jhall - 2014-05-04

I wouldn't release it just yet. Admittedly, I jumped to the Helix build by accident because I wanted to be sure to get all the latest fixes, and the xvdr plugin is not available for either Gotham or Helix on Windows, but I did notice that under Windows, when you start xbmc, for some reason it routes all the OSD and program audio through the headphones and JAWS through the laptop speakers. That approach in theory makes sense, but it is a little annoying in a quiet environment where people are trying to sleep.

Also, I had to do a bit of research to get the vnsi5 plugin for vdr going, because a decision was made to remove the vdr side from the former vnsi3 git tree. It did compile for the version of vdr I am running, so no real headaches there.

It does seem to take several seconds to change channels on Windows, not sure why; I mean 5 or more. In contrast, vdr can change the same channels in under 2 seconds.

In Windows, while playing, I can press the select button to make the OSD appear. Arrowing to the right eventually brings me to an entry that is the channel name (the one that came from the vdr server); pressing select on that brings up the EPG. Cycling through the events using up and down (choosing events from nearby channels), you can press select and the event description shows up. I used JAWS OCR recognition to get a fuzzy read on the description, and then arrowing right and pressing enter to switch caused the channel to go away for 5 or more seconds, then resume the former channel, not the new one that I had selected. I'm thinking that one might be an XBMC bug.

Since this is my first real exposure to XBMC, I haven't really categorized my library yet, and none of my vdr videos show up in the library. My movies are not aptly named either, so I'm not sure what to do about that yet.

It's not a question directly related to the plugin development, but it does raise usability questions from a blind person's perspective for XBMC. Maybe we should start a separate thread for those kinds of issues? I personally wouldn't mind them being thrown in here, but I don't want to presume.


RE: Xbmc not working for blind users. - ruuk - 2014-05-04

(2014-05-04, 04:47)Traker1001 Wrote: When you get a second, maybe a couple of thoughts. One thing I have noticed is that when you are in the info screen for a library movie, the F3 button will read the description, but nothing else. Other function buttons don't read anything. Is there any way to get the function button to read everything on the screen, i.e. actor, genre, etc.?

Also, the function buttons depend on where you are located, as opposed to having F1 through F4 buttons for various locations and information. I was wondering if at some point it wouldn't be possible to detect what screen you are on and just have a single F2 or something read the extra info. I do realize that's asking a lot and may not even be possible; however, I thought I'd put it out there.
F1 just speaks the current control, including the window name (and section name if applicable). This is in case you have for instance walked away and forgot where you were at.
F2 speaks window text. For instance if I have a window that has a header and some other info it speaks that stuff.
F3 speaks the current control info. For instance the info for the currently selected song.
F4 cancels speech.
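[Editor's sketch] The four keys described above amount to a small dispatch table; here is a toy sketch, with handler names invented for illustration rather than taken from the addon's real code:

```python
# Toy dispatch table mirroring the F1-F4 behaviour described above.
def speak_current_control():
    return 'current control + window name'

def speak_window_text():
    return 'window text'

def speak_control_info():
    return 'current control info'

def cancel_speech():
    return 'speech cancelled'

KEY_HANDLERS = {
    'F1': speak_current_control,
    'F2': speak_window_text,
    'F3': speak_control_info,
    'F4': cancel_speech,
}

def on_key(key):
    """Run the handler for a key, ignoring unmapped keys."""
    handler = KEY_HANDLERS.get(key)
    return handler() if handler else None

print(on_key('F3'))  # current control info
```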

Right now F2 and F3 aren't very consistent, because much of what I've done so far has been experimental while I've been trying to figure out various ways to get at the relevant text and information, but as soon as I finish with what I'm working on now, I'll get back to making things work more predictably.

The reason I have separate buttons for window and control text is because I figure people don't want to hear all the information on the window when trying to just hear the info for a selected item. But as I said things aren't consistent just yet, so there is some overlap that makes the distinction unclear.
(2014-05-04, 04:47)Traker1001 Wrote: I did start to work on a YouTube video demoing the capabilities, and was considering a wiki page and drawing some more folks in. But I'm a little unsure if it's time yet, especially with XBMC getting ready to release Gotham real soon and some of the fixes being based on Gotham itself.
Well, since your post, Gotham has been released :)
I don't think there is anything wrong with getting started on any of that; I just wouldn't push too hard to bring people in until things are more ready for general use. If we get too many people here, I'll spend all my time answering questions instead of writing code :)