I didn't have debugging turned on when I had the problem. Just reading a file that big is difficult, particularly when the filesystem is full. Deleting it without seeing what was in it was the only option for me. (Believe me, I have been using Linux for years, since Red Hat 4.x, and know my way around.)
The problem didn't repeat. But it could.
logrotate might not be the right tool for a log that grows very quickly once a problem arises. You could run it, say, every 30 minutes from cron using the size criterion:
Code:
size size
Log files are rotated only if they grow bigger than size bytes. If size is followed by k, the size is assumed to be in kilobytes. If M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G are all valid.
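For example, a minimal standalone config plus a cron entry might look like this. This is only a sketch: the log path, the 100M limit, and the config filename are assumptions, and copytruncate is used so the writing process doesn't have to be told to reopen its log.

```
# /etc/logrotate-xbmc.conf (hypothetical)
/home/user/.xbmc/temp/xbmc.log {
    size 100M
    rotate 3
    compress
    copytruncate
    missingok
}
```

and in the crontab:

```
*/30 * * * * /usr/sbin/logrotate /etc/logrotate-xbmc.conf
```

Note that logrotate keeps its own state file, so running it every 30 minutes is harmless; it only rotates when the size condition is actually met.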
However, there are plenty of file monitoring tools that could be used. Hell, a daemon that did nothing but (pseudo code)
while true
    if size(~/.xbmc/temp/xbmc.log) > 2GB then (take some action)
repeat
would be pretty easy to write.
But harder to make cross platform.
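On Linux, that loop can be sketched in plain shell. Everything here is an assumption to be adjusted: the log path, the 2 GB limit, the 30-minute interval, and the "action" (truncating the file in place). `stat -c%s` is GNU coreutils, which is part of why it isn't cross platform.

```shell
#!/bin/sh
# Hypothetical log watchdog: truncate xbmc.log when it gets too big.
LOGFILE="${LOGFILE:-$HOME/.xbmc/temp/xbmc.log}"
LIMIT="${LIMIT:-$((2 * 1024 * 1024 * 1024))}"   # 2 GB in bytes

check_log() {
    # stat -c%s prints the file size in bytes (GNU coreutils, Linux-only)
    size=$(stat -c%s "$LOGFILE" 2>/dev/null || echo 0)
    if [ "$size" -gt "$LIMIT" ]; then
        # Truncate in place: keeps the same inode, so the process
        # writing to the log keeps working without being restarted.
        : > "$LOGFILE"
        echo truncated
    else
        echo ok
    fi
}

# Loop only when started with the "daemon" argument;
# without it, check_log can be called once, e.g. from cron.
if [ "$1" = daemon ]; then
    while :; do
        check_log
        sleep 1800          # check every 30 minutes
    done
fi
```

Running a single check from cron every 30 minutes (instead of the `daemon` loop) may be simpler than keeping another long-lived process around.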