Linux VAAPI: Nuc, Chromebox, HSW, IVB, Baytrail with Ubuntu 14.04
I followed the guide and the memory leak issue went away. Comparing the two systems, I found that streaming IPTV channels through 'udpxy' triggers a memory leak in Kodi; if we don't use udpxy, there is no memory leak. Just sharing this information.

If we try to play an IPTV stream that does not exist or is temporarily down, the player waits 30 seconds until the timeout is reached. Is there a way to reduce this 30-second timeout when there is no stream?
Thank you.
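A possible workaround, assuming the delay comes from Kodi's curl-based network layer: the timeouts can be lowered in advancedsettings.xml. The values below are illustrative, not tested recommendations:

```xml
<!-- ~/.kodi/userdata/advancedsettings.xml -->
<advancedsettings>
  <network>
    <!-- seconds before a curl connection attempt is considered timed out -->
    <curlclienttimeout>10</curlclienttimeout>
    <!-- seconds of too-low transfer speed before a transfer is aborted -->
    <curllowspeedtime>10</curllowspeedtime>
  </network>
</advancedsettings>
```

Whether these apply to udpxy-relayed streams depends on which input handler Kodi uses for the URL, so this may or may not shorten the wait in your case.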
Core i3 3120M | 2GB Ram | 16GB SSD
Is there any way to delay the start of XBMC/Kodi until after this event?
Code:
INFO: LIRC Connect: successfully started
(from ~/.kodi/temp/kodi.log)

I am using the startup script from post 1.
Sadly lirc uses init.d to start and not upstart - see here: https://bugs.launchpad.net/mythbuntu/+bug/563139 for a lengthy workaround.

Edit: You need to remove the init.d way of starting lirc and use the upstart script instead.
First decide what functions / features you expect from a system. Then decide for the hardware. Don't waste your money on crap.
(2015-03-24, 22:17)fritsch Wrote: Sadly lirc uses init.d to start and not upstart - see here: https://bugs.launchpad.net/mythbuntu/+bug/563139 for a lengthy workaround.

Edit: You need to remove the init.d way of starting lirc and use the upstart script instead.

Thanks for the link.

I tried to follow the suggestions but something went wrong. The two startup scripts I am using:

Code:
xbmc@xbmc:~$ cat /etc/init/lirc.conf
description "Lirc"
author "Rune"

start on local-filesystems
stop on starting shutdown

expect fork
#respawn

pre-start script
    if [ ! -d "/var/run/lirc" ]; then
        mkdir -p "/var/run/lirc"
    fi
end script

script
    exec /usr/sbin/lircd --output=/var/run/lirc/lircd --driver=commandir
    rm -f /dev/lircd && ln -s /var/run/lirc/lircd /dev/lircd
end script

emits lirc-started

post-stop script
    [ -h "/var/run/lirc/lircd" ] && rm -f /var/run/lirc/lircd
end script

My xbmc upstart script:
Code:
xbmc@xbmc:~$ cat /etc/init/xbmc.conf
# xbmc-upstart
# starts XBMC on startup by using xinit.
# by default runs as xbmc, to change edit below.
env USER=xbmc

emits xbmc-started
description     "XBMC-barebones-upstart-script"
author          "Matt Filetto"

start on (filesystem and stopped udevtrigger and started lirc)
stop on runlevel [016]

# tell upstart to respawn the process if abnormal exit
respawn
respawn limit 10 5
limit nice 21 21

script
exec su -c "xinit /usr/bin/xbmc --standalone :0" $USER
end script

If I remove lirc from /etc/init.d/ then Kodi never finds the /dev/lircd
Code:
01:53:32 T:139824437864192    INFO: LIRC Process: using: /dev/lircd
01:53:32 T:139824437864192    INFO: LIRC Connect: connect failed: No such file or directory

but if I leave lirc in /etc/init.d/ then Kodi starts before LIRC is ready and I have to restart Kodi. So what is the correct way to "disable LIRC the init.d way"?
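A sketch of disabling the init.d start of lirc on Ubuntu 14.04, assuming a mixed sysv-rc/upstart system; these are system-configuration commands to be run as root (see the update-rc.d man page linked below):

```shell
# Keep /etc/init.d/lirc in place (the package expects it),
# but disable its rc*.d start links so only the upstart job
# in /etc/init/lirc.conf starts the daemon.
update-rc.d lirc disable

# Alternatively, remove the links entirely (note: they come back
# on package upgrades unless the init.d script is also removed):
# update-rc.d -f lirc remove

# Verify no start (S) link remains for the default runlevel:
ls /etc/rc2.d/ | grep '^S.*lirc' || echo "no lirc start link in rc2.d"
```

With the init.d links disabled, the upstart job's `emits lirc-started` / `started lirc` event chain should be the only thing gating Kodi's start.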

PS: If this is off-topic, let me know and I'll use another thread.
http://manpages.ubuntu.com/manpages/hard...c.d.8.html <-
And yeah, you are fully off-topic -> ubuntuforums.org is the correct place to ask, as it's their init foobar.
https://bugs.freedesktop.org/show_bug.cgi?id=82349#c8 <- this fixes the EDID issue - you need to patch some recent kernel and all will be fine :-)
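For reference, a rough sketch of building a patched kernel .deb on Ubuntu; the kernel version and the patch file name are placeholders, the actual patch being the one from the freedesktop bug above:

```shell
# Fetch a recent mainline kernel source (version here is only an example)
wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.tar.xz
tar xf linux-3.19.tar.xz && cd linux-3.19

# Apply the EDID fix (edid-fix.patch is a placeholder name)
patch -p1 < ../edid-fix.patch

# Reuse the running kernel's config, then build Debian packages
cp /boot/config-"$(uname -r)" .config
make olddefconfig
make -j"$(nproc)" deb-pkg

# Install the resulting image package:
# sudo dpkg -i ../linux-image-*.deb
```

This is the generic Ubuntu route; fritsch's prebuilt .debs below save you the build.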
Here is a kernel, that includes the byt fix and the EDID fix:

https://dl.dropboxusercontent.com/u/5572..._amd64.deb
https://dl.dropboxusercontent.com/u/5572..._amd64.deb

Much fun with testing. Source is at the usual location ...
(2015-03-19, 20:58)Ney Wrote:
(2015-03-19, 20:22)fritsch Wrote: Good news.

Intel fixed MCDI / MADI for IVB, SNB, BYT - as we speak wsnipex is building new packages. They should go through the normal ppa with version: 1.5.1~pre1

Very interesting. Is there any indication that the weaker BYT units can handle mcdi/madi, or is that wait and see?

Been testing on my Baytrail ASrock Q1900DC-ITX with 40 mbit/s h264 1080i30 made from SVT test sequences.

ftp://vqeg.its.bldrdoc.gov/HDTV/

Edit: the above link seems to get mangled and broken; this (hopefully) is the correct one -

Code:
ftp://vqeg.its.bldrdoc.gov/HDTV/

For madi/mcdi it just works without any overlay; with an overlay I get skips. gputop looks like -

Code:
render busy:  91%: ██████████████████▎                    render space: 165/131072
                bitstream busy:  13%: ██▋                                 bitstream space: 8/131072
                  blitter busy:  10%: ██                                    blitter space: 4/131072

                          task  percent busy
                           GAM:  90%: ██████████████████      vert fetch: 46057249 (240/sec)
                           TSG:  45%: █████████               prim fetch: 23019665 (120/sec)
                           VFE:  45%: █████████            VS invocations: 46046422 (240/sec)
                           TDG:  45%: █████████            GS invocations: 0 (0/sec)
                            VF:  45%: █████████                 GS prims: 0 (0/sec)
                          GAFS:  31%: ██████▎              CL invocations: 23001746 (120/sec)
                          GAFM:   0%:                           CL prims: 21592846 (120/sec)
                            VS:   0%:                      PS invocations: 974156554576 (124588800/sec)
                            CL:   0%:                      PS depth pass: 968796718653 (124416000/sec)
                           SVG:   0%:                      
                            HS:   0%:
Below is the same sequence but 40mbit 1080p60 so no deinterlace
Code:
render busy:  50%: ██████████                             render space: 30/131072
                bitstream busy:  15%: ███                                 bitstream space: 12/131072
                  blitter busy:   8%: █▋                                    blitter space: 3/131072

                          task  percent busy
                           GAM:  53%: ██████████▋             vert fetch: 46035576 (240/sec)
                          GAFS:  11%: ██▎                     prim fetch: 23008850 (120/sec)
                            VF:   0%:                      VS invocations: 46024746 (240/sec)
                            VS:   0%:                      GS invocations: 0 (0/sec)
                                                                GS prims: 0 (0/sec)
                                                           CL invocations: 22990974 (120/sec)
                                                                CL prims: 21582030 (120/sec)
                                                           PS invocations: 962966918224 (124588800/sec)
                                                           PS depth pass: 957628309053 (124416000/sec)

The SVT clips I have are pretty much all panning, so visually you don't really get a chance to see the difference from bob; madi/mcdi do display a 1-pixel static weave correctly on something like -

http://www.w6rz.net/vertrez1080.zip (FWIW IIRC from years ago some w6rz streams have the wrong flag for field order - doesn't matter on this one though)

Currently madi/mcdi doesn't quite work properly, in that it's not field rate. This patch on top of

https://github.com/fritsch/libva-intel-driver.git ppa-new-1.5.0 branch

http://xbmclogs.com/paqmnxn6y/fefnv4/raw

"works for me"
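For anyone wanting to reproduce: a sketch of applying the patch on top of that branch and building the driver. The patch filename and the install paths are assumptions (on Ubuntu amd64 the i965 driver normally lives under /usr/lib/x86_64-linux-gnu):

```shell
git clone https://github.com/fritsch/libva-intel-driver.git
cd libva-intel-driver
git checkout ppa-new-1.5.0

# madi-fix.patch is a placeholder name for the patch linked above
patch -p1 < ../madi-fix.patch

./autogen.sh --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu
make -j"$(nproc)"
sudo make install   # replaces the system i965_drv_video.so
```

Installing over the distro package means the next PPA update will overwrite it, which is fine for testing.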

Field-rate mcdi on the Burosch letters has a few artifacts that madi doesn't have. On only one out of the eight SVT samples I have can I see a very slight artifact with mcdi. Other HDTV I tried so far looks the same with both to me.
Most of the speed is sadly eaten by the full RGB conversion in vaPutSurface :-( We really hope that with EGL we can avoid that and then have more performance available for VPP lanczos3 or that MCDI.
Ah, and btw, can you send the above patch to the bugtracker, so that gwenole and haihao can have a look?
Feedback for your patch: With that the flickering is gone. Burosch does not look that "pulsing" anymore.
Thanks for the feedback, I sent the patch to FDO.

I notice Gwenole has a new commit on FDO intel-driver which looks to make some things more efficient including vaapi-bob, but I guess not m*di.

I haven't tried that yet - from memory, gputop with vaapi-bob currently is 80% for 1080i30.

So will egl let you read back yuv surfaces post decode/processing?
yes, there is an API for that.
Nice work the devs have made. Now I can watch 1080i (14 Mbit MPEG-4 MBAFF) with hardware deinterlacing (vaapi-bob) on my J1800 Baytrail SoC, so CPU usage went from 60% on both cores to 10% on both cores.