2014-01-25, 05:18
Thanks for the database. I think you're hitting an out of memory (OOM) situation, rather than any encoding issues as I first thought.
On my Pi with your Textures13.db I ran:
Code:
./texturecache.py x
which dumps the contents of Textures13.db to the console, and with bcmstat.sh running in a separate window it becomes pretty clear that memory is the issue.
The problem is that recent builds now use JSON by default to access the texture cache database. In your case this means almost 60,000 rows are loaded into memory objects by XBMC (~110MB); those objects are then transferred over JSON to texturecache.py, where they are stored in memory a second time. Combined, that's a lot of data, and it exhausts the available physical memory.
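As a rough sanity check on those numbers (my own back-of-the-envelope arithmetic, not anything XBMC reports):

```shell
# ~110MB spread over ~60,000 rows, and the same data ends up
# held twice overall (once in XBMC, once in texturecache.py)
echo "$((110 * 1024 * 1024 / 60000)) bytes per row (held twice)"
```

So each row costs on the order of 2KB, twice over, which is why the footprint balloons the way it does on a 512MB machine.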
With 128MB swap on my 512MB Pi (256MB/256MB split) it didn't crash, but it was still running after 15 minutes and the Pi was unresponsive (though still answering pings). Allocating more memory to the ARM might help it run more quickly, perhaps even complete, but eventually more database rows will outgrow whatever memory you allocate, so it's not really a long-term solution.
You do, however, have two options:
1) Since you're running this directly on the Pi, you can disable JSON for Textures database access with @dbjson=no; SQLite direct access will then be used instead of JSON.
2) Run texturecache.py on a remote PC rather than on the Pi, so that memory usage on the Pi is significantly reduced (disabling dbjson shouldn't be necessary in this case; you can still do so, but you would then have to mount the userdata folder on the remote PC).
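For option #1, the run would look something like this — the same "x" dump as above, just with the @dbjson=no switch added on the command line (assuming @-style overrides are passed alongside the command, as above):

```shell
# same dump as before, but forcing direct SQLite access to Textures13.db
./texturecache.py x @dbjson=no
```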
#1 is likely to be the easiest for you. I may give some thought to making dbjson=no the default when run locally, and dbjson=yes the default for remote access.