Video of latest xbmc code on Raspberry Pi
#46
My Test was with USB-Stick!
Reply
#47
Used to be able to OC to 1GHz, but after an update months ago (can't remember which one caused it) my main RPi won't boot... max OC I can get it to boot is 950/450/450/6... using a USB 3.0 for storage and a 2GB SD.
 
  • Intel NUC Kit DN2820FYKH ~ Crucial DDR3L SO-DIMM 4GB ~ SanDisk ReadyCache 32GB SSD ~ Microsoft MCE model 1039 RC6 remote
Reply
#48
(2013-09-30, 11:56)xbs08 Wrote: Used to be able to OC to 1GHz, but after an update months ago (can't remember which one caused it) my main RPi won't boot... max OC I can get it to boot is 950/450/450/6... using a USB 3.0 for storage and a 2GB SD.

Limited overclock may be down to the specific processor. It is often down to the power supply.
More overclock=>more power required=>more voltage drop from a poor supply.

It may be worth measuring the voltage when overclocked and under load (e.g. scrolling through views):
http://elinux.org/R-Pi_Troubleshooting#T...r_problems

or just try any other power supplies (and power supply cables) you have around.
Reply
#49
(2013-09-30, 01:35)dhead Wrote: Great achievement popcornmix.

Did you add a heatsink on top of the RAM before overclocking?

No need for heatsinks. We believe 85C is the safe temperature limit (and overclock will disable if that is hit).
However I've never heard of anyone who's hit that (and I've deliberately tried).
Reply
#50
(2013-09-30, 00:24)tehnatural Wrote: I noticed you have readbufferfactor set to 10.0. I'm also assuming that you do an NFS mount for your media, and if so, what are your rsize and wsize set at? Is there any reason to alter rsize and wsize with the newly added readbufferfactor? I'm currently on a wired network, so I'm not concerned with dropping packets - would larger packets mean increased performance? I personally noticed an increase in network performance switching to UDP from TCP. Which would you suggest?

For me I get the best results with an OS mount of:
Code:
sudo mount 192.168.4.9:/Public -o _netdev,nfsvers=3,rw,intr,noatime,rsize=32768,wsize=32768,nolock,async,proto=udp /home/pi/dell

The video was just using the standard xbmc nfs access.
Reply
#51
(2013-09-30, 12:50)popcornmix Wrote:
(2013-09-30, 11:56)xbs08 Wrote: Used to be able to OC to 1GHz, but after an update months ago (can't remember which one caused it) my main RPi won't boot... max OC I can get it to boot is 950/450/450/6... using a USB 3.0 for storage and a 2GB SD.

Limited overclock may be down to the specific processor. It is often down to the power supply.
More overclock=>more power required=>more voltage drop from a poor supply.

It may be worth measuring the voltage when overclocked and under load (e.g. scrolling through views):
http://elinux.org/R-Pi_Troubleshooting#T...r_problems

or just try any other power supplies (and power supply cables) you have around.

I'll try another power supply I have laying around.

btw, test build running without issues :)

Thanks
 
Reply
#52
@popcornmix - any thoughts on the question I posed in post #38 regarding moving up to 1080p artwork?
Texture Cache Maintenance Utility: Preload your texture cache for optimal UI performance. Remotely manage media libraries. Purge unused artwork to free up space. Find missing media. Configurable QA check to highlight metadata issues. Aid in diagnosis of library and cache related problems.
Reply
#53
(2013-09-29, 20:35)MilhouseVH Wrote: Now that the GUI can be run more easily at 1080p I'm just wondering if it's a good idea to re-cache fanart which has been converted from 1920x1080 to 1280x720 as a result of <fanartres>720</fanartres> in advancedsettings.xml.
I've never tried fanartres of 1080p. I don't think there will be any problems with performance, but I'm more nervous about memory usage.
Let's have a think about what's required:

Textures are generally lazily destroyed after 5 seconds of not being used.
A 32bpp 1080p texture requires 8MB on GPU. If you scroll through fanarts at, say, 2 per second then you may have 80MB of fanart textures.

The cover art tends to be even worse. If using Amber with widgets enabled (like in my video) there are 16 covers visible on Movies tab,
16 covers on TV tab, and 8 covers on favourites. (plus the skin background).

The widgets don't get freed when entering the library views. So if you now go into the movies view and the thumbnail view, there are 10 covers visible.
It also keeps one row above and below in memory, so there are 20 covers loaded (there is actually one extra too, probably due to the ".." placeholder).

So now we have 21+16+16+8 = 61 covers in GPU memory.

Plus the number we can load in 5 seconds (maybe an extra 60), so 121 covers.

In my video we had imageres=512 (*), so they are 352x512 = 720K each, or 87MB.

Then add in the last 5 seconds of fanart. We may also have up to 5 encodes/decodes in progress, requiring additional decoded jpeg buffers (in YUV format).
The 1080p framebuffer (triple buffered) is another 24MB.
Code and data for general GPU use (maybe 16MB). There's also various control lists for 3D hardware and compiled shader code (maybe 16MB).

So, you are now above 220MB of GPU memory. With gpu_mem=256 you might be okay but it could be pretty tight.
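As a sanity check, the tally above can be reproduced in a few lines (the counts are the ballpark estimates from this post, not measured values):

```python
# Rough re-tally of the GPU memory budget estimated above (1080p GUI, 32bpp).
# All counts are the ballpark figures from the post, not measurements.

def texture_bytes(width, height, bpp=32):
    """Bytes for one uncompressed texture."""
    return width * height * bpp // 8

MB = 1_000_000

fanart      = 10 * texture_bytes(1920, 1080)   # ~2/s over 5s => 10 fanarts (~8MB each)
covers      = 121 * texture_bytes(352, 512)    # 61 on screen + ~60 loaded within 5s
framebuffer = 3 * texture_bytes(1920, 1080)    # triple-buffered 1080p framebuffer
misc        = 32 * MB                          # GPU code/data + control lists/shaders
# (in-flight jpeg decode buffers would add a little more on top)

total = fanart + covers + framebuffer + misc
print(total // MB)  # → 227
```

which lands just above the ~220MB figure, hence the warning that gpu_mem=256 "could be pretty tight".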

If you were to find a skin that offered a fanart thumbnail view, then all bets are off.

There's also the slightly unpleasant possibility that you are decoding a 1080p video (with 8 reference frames) and you hit tab and then start scrolling your movie wall
(fortunately that scenario is probably sluggish enough that you won't scroll very quickly through your movies).

This has been assuming 32bpp textures. You can run with 16bpp textures (and that's the default, switchable in GUI), which will halve much of this memory use.
The nice thing about 16bpp textures is that it's purely transient. It doesn't affect the texture cache.
In theory it could be adjusted dynamically (i.e. switch to 16bpp when gpu memory is 3/4 full).

So, my current thought is:
512M Pi: 1080p gui. 32bpp textures. imageres=512. fanartres=720. gpu_mem=256
256M Pi: 1080p gui. 16bpp textures. imageres=512. fanartres=720. gpu_mem=128

A custom option that you would probably get away with is fanartres=1080, but there may be skins or use cases that break that.

(2013-09-29, 20:35)MilhouseVH Wrote: Setting <fanartres> to 1080 should result in more-or-less original resolution fanart, shouldn't it? (It actually results in 1920x1088 artwork, is that slightly unusual height anything to worry about?)
Yes, 1080 is actually an inconvenient value. Videos are encoded with a height of 1088, and 8 pixels are cropped (as macroblocks are typically 16x16).
Textures are generally padded to multiples of 16, and slower code is involved when they don't match that.
However the world is quite different now, and this sort of code (https://github.com/xbmc/xbmc/blob/master...re.cpp#L95) is not really involved, so I need to step through the code and find out if a less aligned height actually causes any problems.
(I'm suspecting at the moment that the texture cache images are resized to 1920x1088 and then get resized to 1920x1080 on screen, so there may be a small aspect ratio error, and more resize smoothing than necessary).
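The alignment issue is just rounding up to a multiple of 16; a generic sketch of the padding (not the actual xbmc code):

```python
def pad16(x):
    """Round up to the next multiple of 16 (typical macroblock / texture alignment)."""
    return (x + 15) & ~15

print(pad16(1080))  # → 1088 (the 8 extra rows are what 1080p video crops off)
print(pad16(720))   # → 720 (already aligned, hence no issue at fanartres=720)
```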
Reply
#54
Many thanks for the detailed explanation. Would destroying textures more frequently be of any benefit, say 2-3 seconds rather than 5? Perhaps keep 5 seconds for 720 (or less), and use a lower (faster) value for >720, unless there's a performance downside of course.
Reply
#55
(2013-09-30, 20:24)MilhouseVH Wrote: Many thanks for the detailed explanation. Would destroying textures more frequently be of any benefit, say 2-3 seconds rather than 5? Perhaps keep 5 seconds for 720 (or less), and use a lower (faster) value for >720, unless there's a performance downside of course.

Yes, when tracking down the texture leak bug (https://github.com/xbmc/xbmc/pull/3331) I set the timeout to zero, and didn't see much change.
Change the 5000 to 0 (or something else) if you want to test it:
https://github.com/xbmc/xbmc/blob/master....cpp#L5002

I guess it only helps when you backtrack. i.e. scroll too far and then go back (although one row off each end of screen is cached anyway).
Reply
#56
(2013-09-30, 22:42)popcornmix Wrote:
(2013-09-30, 20:24)MilhouseVH Wrote: Many thanks for the detailed explanation. Would destroying textures more frequently be of any benefit, say 2-3 seconds rather than 5? Perhaps keep 5 seconds for 720 (or less), and use a lower (faster) value for >720, unless there's a performance downside of course.

Yes, when tracking down the texture leak bug (https://github.com/xbmc/xbmc/pull/3331) I set the timeout to zero, and didn't see much change.
Change the 5000 to 0 (or something else) if you want to test it:
https://github.com/xbmc/xbmc/blob/master....cpp#L5002

I guess it only helps when you backtrack. i.e. scroll too far and then go back (although one row off each end of screen is cached anyway).

Thanks. I'll re-cache my fanart at 1080 and see if 5 seconds is a problem (over NFS, still waiting for the USB3 memory stick). I'll then use a delay of 0 and see if I notice any difference. One thing that occurs to me is that, if GPU memory is limited, couldn't textures simply be evicted from GPU RAM as required? Then you could have a longer lazy eviction time, but also evict on demand if/when required to free up RAM.
Reply
#57
(2013-10-01, 00:04)MilhouseVH Wrote: Thanks. I'll re-cache my fanart at 1080 and see if 5 seconds is a problem (over NFS, still waiting for the USB3 memory stick). I'll then use a delay of 0 and see if I notice any difference. One thing that occurs to me is that, if GPU memory is limited, couldn't textures simply be evicted from GPU RAM as required? Then you could have a longer lazy eviction time, but also evict on demand if/when required to free up RAM.

There is a "discardable" flag on GPU buffers which means it can be discarded if another allocation will fail.
I'm not sure if that will produce a blank texture or random corruption when it attempts to render (or something worse!). May be worth testing.
It won't be ideal, as xbmc will believe the texture is still valid and will never re-upload it, so it won't recover as memory is freed up.

A better scheme might be for xbmc to query free GPU memory on each texture upload. If it's too low, then kick the texture delete thread with a timeout of 0.
But this is getting a bit too Pi-specific/intrusive, and so hard to get accepted into xbmc.
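The scheme could be sketched like this (a hypothetical Python outline of the policy, not xbmc code - the names free_gpu_mem / kick_texture_cleanup / upload and the watermark value are all stand-ins; on the Pi, free relocatable GPU memory is reported by `vcgencmd get_mem reloc`):

```python
# Hypothetical sketch of the upload-time eviction policy described above.
# Callbacks are injected so the policy itself stays testable; in xbmc these
# would be real VideoCore/texture-manager calls, not these names.

LOW_WATERMARK = 16 * 1024 * 1024  # assumed "too low" threshold, not from the post

def upload_texture(tex, free_gpu_mem, kick_texture_cleanup, upload):
    """Before uploading, check free GPU memory; if low, run the texture
    delete thread immediately (timeout 0) instead of waiting 5 seconds."""
    if free_gpu_mem() < LOW_WATERMARK:
        kick_texture_cleanup(timeout_ms=0)
    upload(tex)
```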
Reply
#58
I re-cached (using texturecache.py) all my movie fanart (almost all of which is 1920x1080) with <fanartres> set to 1080.

With 256/256 GPU split, 1080p GUI and 32bpp textures, all the latest newclock3 patches and using next firmware (24 Sep f7cc4e449a45c6818095ce9ac87ee414290ea9a7), re-caching of the fanart went well although I did get the following:

Code:
23:33:50 T:2980050000   ERROR: COMXCoreComponent::DecoderEventHandler OMX.broadcom.image_decode - OMX_ErrorStreamCorrupt, Bitstream corrupt
23:33:51 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:33:51 T:2980050000   ERROR: COMXCoreComponent::DecoderEventHandler OMX.broadcom.image_decode - OMX_ErrorStreamCorrupt, Bitstream corrupt
23:33:52 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:33:52 T:2980050000   ERROR: COMXCoreComponent::DecoderEventHandler OMX.broadcom.image_decode - OMX_ErrorStreamCorrupt, Bitstream corrupt
23:33:53 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:33:54 T:2980050000   ERROR: Previous line repeats 1 times.
23:33:54 T:2980050000   ERROR: COMXCoreComponent::DecoderEventHandler OMX.broadcom.image_decode - OMX_ErrorStreamCorrupt, Bitstream corrupt
23:33:55 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:33:56 T:2980050000   ERROR: Previous line repeats 1 times.
23:33:56 T:2980050000   ERROR: COMXCoreComponent::DecoderEventHandler OMX.broadcom.image_decode - OMX_ErrorStreamCorrupt, Bitstream corrupt
23:33:57 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:34:14 T:2779251792   ERROR: Previous line repeats 16 times.
23:34:14 T:2779251792  NOTICE: Thread JobWorker start, auto delete: true
23:34:15 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:35:12 T:2762474576   ERROR: Previous line repeats 57 times.
23:35:12 T:2762474576  NOTICE: Thread JobWorker start, auto delete: true
23:35:13 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:37:37 T:3043263296   ERROR: Previous line repeats 144 times.

and two fanart items failed to cache entirely - Alien and Spider-man 3.

It appears that Alien was partially cached - XBMC created a row in Textures13.db, but didn't create an image file in the Thumbnail folder. The only errors written to the log are those above.

Running "c movies", Spider-man 3 failed to re-cache with more "Bitstream corrupt" and "timeout" errors in the log.

I rebooted, and ran "c movies" once more, but still the Spider-man 3 fanart failed to cache, so maybe there is something "odd" about this JPG that causes the new decode/encode algorithm a problem? Viewing the uncached Spider-man 3 fanart item in the GUI results in the same "Bitstream corrupt" and "timeout" errors.

Removing the database row for the Alien fanart from Textures13.db allowed me to re-cache this fanart successfully, so maybe it was just a one-off issue, however Spider-man 3 is 100% repeatable. The Alien fanart does have a slightly odd resolution - 1980x1080 and it's not the highest quality - but that could be just coincidental.

I've uploaded the original fanart files to Dropbox: Alien and Spider-man 3.

The debug log when re-caching just Spider-man 3 is here (pastebin). Note that texturecache.py will try to download the artwork from XBMC three times before giving up. The logging of the "wait event timeout" message also continues for what seems like forever - I uploaded the log about 5 minutes after the last download request occurred at 00:05:03, but 15 minutes later (00:20:00) the logging of timeout events continues.

On to the decode (GUI) side of things... with the standard 5 second texture deletion timeout, I am able - with an IR remote control - to scroll rapidly through the Movies library in Fanart view (Amber skin, "Show Info" enabled) without any display related problems, although there are a *lot* of these messages in the log:
Code:
23:39:52 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:14 T:2737308752   ERROR: Previous line repeats 21 times.
23:40:14 T:2737308752  NOTICE: Thread JobWorker start, auto delete: true
23:40:14 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:47 T:2737308752   ERROR: Previous line repeats 33 times.
23:40:47 T:2737308752  NOTICE: Thread BackgroundLoader start, auto delete: false
23:40:47 T:2728920144  NOTICE: Previous line repeats 1 times.
23:40:47 T:2728920144  NOTICE: Thread JobWorker start, auto delete: true
23:40:48 T:2812277840  NOTICE: Previous line repeats 1 times.
23:40:48 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:51 T:2868384848   ERROR: Previous line repeats 3 times.
23:40:51 T:2868384848  NOTICE: Thread JobWorker start, auto delete: true
23:40:52 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:52 T:2737308752   ERROR: COMXCoreComponent::GetInputBuffer OMX.broadcom.image_decode wait event timeout
23:40:53 T:2712142928  NOTICE: Thread JobWorker start, auto delete: true
23:40:53 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:54 T:2737308752   ERROR: COMXCoreComponent::GetInputBuffer OMX.broadcom.image_decode wait event timeout
23:40:54 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:54 T:2728920144   ERROR: COMXCoreComponent::GetInputBuffer OMX.broadcom.image_decode wait event timeout
23:40:55 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:55 T:2868384848   ERROR: COMXCoreComponent::GetInputBuffer OMX.broadcom.image_decode wait event timeout
23:40:56 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
23:40:57 T:2712142928   ERROR: COMXCoreComponent::GetInputBuffer OMX.broadcom.image_decode wait event timeout
23:40:57 T:2812277840   ERROR: COMXCoreComponent::WaitForOutputDone OMX.broadcom.image_encode wait event timeout
...

so the timeout setting could certainly do with tweaking (this is with Thumbnails mounted over NFS).

I'll rebuild with the texture deletion timeout set to 0, but I'm not expecting it to make much if any difference since I didn't see any memory related problems with it set to 5 seconds.
Reply
#59
With g_TextureManager.FreeUnusedTextures(0) I don't really see an obvious performance difference when browsing rapidly through Movies and displaying different 1080p fanart.

There were a few OMX errors however, that are not timeout related:
Code:
01:36:59 T:2902631504   ERROR: COMXCoreComponent::SetStateForComponent - OMX.broadcom.image_decode failed with omx_err(0x80001000)
01:36:59 T:2902631504   ERROR: COMXCoreComponent::SetStateForComponent - OMX.broadcom.image_decode failed with omx_err(0x80001000)
...
01:37:53 T:2902631504   ERROR: COMXCoreComponent::WaitForCommand OMX.broadcom.image_decode wait timeout event.eEvent 0x00000000 event.command 0x00000002 event.nData2 320
01:37:53 T:2902631504   ERROR: COMXCoreComponent::DisableAllPorts WaitForCommand error on component OMX.broadcom.image_decode port 320 omx_err(0x7fffffff)
01:37:53 T:2902631504   ERROR: COMXCoreComponent::Initialize - error disable ports on component OMX.broadcom.image_decode omx_err(0x7fffffff)
...
01:37:55 T:2902631504   ERROR: COMXCoreComponent::WaitForCommand OMX.broadcom.image_decode wait timeout event.eEvent 0x00000000 event.command 0x00000000 event.nData2 2
01:37:55 T:2902631504   ERROR: COMXCoreComponent::WaitForCommand - OMX.broadcom.image_decode failed with omx_err(0x7fffffff)
01:37:55 T:2902631504   ERROR: COMXTexture::Decode - Error alloc buffers  (80001012)

It seems that a 512MB Pi with 256/256 split is capable of handling 1080p artwork and a 1080p/32bpp GUI with the default 5 seconds texture deletion interval, so reducing this deletion interval doesn't seem necessary unless there's a better way to test.
Reply
#60
(2013-10-01, 01:55)MilhouseVH Wrote: and two fanart items failed to cache entirely - Alien and Spider-man 3.

Spider-man 3 is actually very unusual. It is a jpeg encoded with CMYK colourspace (jpegs are generally YUV).

jpegsnoop reports:
"NOTE: Scan parsing doesn't support CMYK files yet."

QuickTime displays it with incorrect colours. I think that is just a weird file and you'd be best to convert it or download a different version.

====edit====

Also xbmc's slow jpeg decoder reports:
Code:
16:26:04 T:2922026048 WARNING: JpegIO: Error 28: Unsupported color conversion request
so it's not entirely happy.

However, the file is useful, as we weren't quitting out as early as possible.
I've now made the first sign of "bitstream corrupt" quit out, and I parse the number of colour components to reject CMYK before sending data to GPU.
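The component-count check can be illustrated with a minimal stdlib-only parser (a sketch of the idea, not the actual firmware code): a JPEG's SOFn marker carries the number of colour components at a fixed offset, and 4 components indicates CMYK/YCCK.

```python
# Sketch of rejecting CMYK jpegs by component count, as described above.
# 1 = greyscale, 3 = YCbCr (normal), 4 = CMYK/YCCK (the problem case).

import struct

def jpeg_components(data):
    """Return the colour component count from a JPEG's SOF marker."""
    if data[:2] != b'\xff\xd8':
        raise ValueError("not a JPEG")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            i += 1
            continue
        marker = data[i + 1]
        # SOF0..SOF15, excluding DHT/JPG/DAC which share the 0xC0 range
        if 0xC0 <= marker <= 0xCF and marker not in (0xC4, 0xC8, 0xCC):
            # segment layout: length(2) precision(1) height(2) width(2) components(1)
            return data[i + 9]
        (length,) = struct.unpack('>H', data[i + 2:i + 4])
        i += 2 + length
    raise ValueError("no SOF marker found")

def looks_like_cmyk(data):
    return jpeg_components(data) == 4
```

This mirrors the described behaviour: anything reporting 4 components is rejected before the bitstream is sent to the GPU.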

Alien is working every time for me (when decoded on its own).
I imagine the current timeouts only occur in complex scenarios (multiple concurrent jpegs being encoded/decoded).
If you have a repeatable failure I'd be interested.
Reply