Hyram Wrote:No, you haven't misunderstood. I feel that with most rendering duties being shunted off to the GPU (meaning there are CPU clock-cycles to spare by the metric tonne) and the prevalence of gigabit ethernet in the home LAN, any performance hit incurred by reading metadata directly instead of a database would be negligible.
You feel, or you know, the performance hit would be negligible? I can't imagine why you would think that reading (potentially) thousands of XML files wouldn't be massively slower than querying a single database. Databases are designed precisely for fast retrieval from large amounts of data; an index lets a lookup skip straight to the matching records instead of touching every one.
Here's a scenario: say I'm looking at the movie info for Armageddon, and in the cast list I click on Bruce Willis to see all the movies I have that he's in. I currently have ~975 movies, and it takes XBMC around 1-2 seconds to return the list of movies starring Bruce Willis from the database. Now imagine how long it would take to scan the XML files for all 975 movies, spread across 8 hard drives (most of which are spun down), just to find the 12 or so that have Bruce Willis in them. Every time you wanted to browse movies by genre, XBMC would have to read every XML file again just to generate the list of genres. And what about data that isn't stored in the XML file at all? Ever use the resume play feature in XBMC? That data lives only in the database, not in the XML file.
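To make the difference concrete, here's a minimal sketch of the two lookup strategies. This is not XBMC's actual schema or code; the table names, file names, and sample data are hypothetical, and the XML strings stand in for per-movie .nfo files on disk. The point is structural: the database answers "which movies star Bruce Willis?" with one indexed query, while the file-based approach has to open and parse every movie's XML regardless of whether he's in it.

```python
import sqlite3
import xml.etree.ElementTree as ET

# --- Database approach: one indexed query over a normalized schema ---
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE movie (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE actor (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE cast_link (movie_id INTEGER, actor_id INTEGER);
    CREATE INDEX idx_cast_actor ON cast_link(actor_id);
""")
db.execute("INSERT INTO movie VALUES (1, 'Armageddon'), (2, 'Die Hard'), (3, 'Titanic')")
db.execute("INSERT INTO actor VALUES (1, 'Bruce Willis')")
db.execute("INSERT INTO cast_link VALUES (1, 1), (2, 1)")

def movies_with_actor_db(name):
    # One query; the index on actor_id means unrelated rows are never touched.
    rows = db.execute("""
        SELECT m.title FROM movie m
        JOIN cast_link c ON c.movie_id = m.id
        JOIN actor a ON a.id = c.actor_id
        WHERE a.name = ?
        ORDER BY m.title""", (name,)).fetchall()
    return [r[0] for r in rows]

# --- XML approach: every .nfo-style file must be opened and parsed ---
nfo_files = {
    "Armageddon.nfo": "<movie><title>Armageddon</title>"
                      "<actor><name>Bruce Willis</name></actor></movie>",
    "Die Hard.nfo":   "<movie><title>Die Hard</title>"
                      "<actor><name>Bruce Willis</name></actor></movie>",
    "Titanic.nfo":    "<movie><title>Titanic</title>"
                      "<actor><name>Leonardo DiCaprio</name></actor></movie>",
}

def movies_with_actor_xml(name):
    # Full scan: every file is parsed even when the actor appears in none
    # of them. With 975 real files on spun-down drives, this cost dominates.
    hits = []
    for content in nfo_files.values():
        root = ET.fromstring(content)
        if any(a.findtext("name") == name for a in root.iter("actor")):
            hits.append(root.findtext("title"))
    return sorted(hits)
```

Both functions return the same answer here, but the XML version's cost grows with the size of the whole library (and wakes every disk), while the indexed query's cost grows only with the size of the result.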
I can't see a scenario where doing away with the database would be a good idea, and I'd bet it never happens.