Wayland support

yolk Offline
Junior Member
Posts: 11
Joined: Mar 2017
Reputation: 0
Post: #16
(2017-03-21 14:33)RockerC Wrote:  
(2017-03-21 00:24)yolk Wrote:  Ah OK - I didn't look specifically into VDPAU. But if that's clear there is no reason to include it in the proposal, so I've removed it.
Could VDPAU be substituted with VAAPI (VA-API)? http://en.wikipedia.org/wiki/Video_Acceleration_API
Well, VA-API is part of the proposal (for the cards/drivers that support it).
daniels Offline
Junior Member
Posts: 3
Joined: May 2017
Reputation: 0
Post: #17
Congratulations on being accepted this year! I'm one of the Wayland developers, and would be more than happy to help and answer any questions you have, either here or as 'daniels' on Freenode.

Using wp_presentation would be really great. Unfortunately it's still not implemented in Mutter for boring internal infrastructural reasons, but I'd expect that to change soon.

Native YUV support would also be really nice to have, though again, Mutter doesn't yet support it. :\ The news article linked to is very old; it mainly describes work on the wl_drm protocol, which is an internal implementation detail of Mesa and thus not supported on other platforms. We also reserve the right to break that interface, so please don't rely on it! Instead, I'd be looking more towards the zwp_linux_dmabuf_v1 interface, which is again supported by Weston, and will, I hope, be supported by Mutter within the next couple of months.

Using dmabuf directly allows you to be in full control of the presentation layer, not having to route around EGL/VA-API's own presentation support. If this is a viable option then I'd personally be very happy to see it, though I wouldn't be surprised if this required a fair bit of rework on Kodi, so please don't necessarily take this as good advice. :-) This is the route VA-API is going down as well: allowing clients to access dmabuf handles directly and drive presentation themselves.

Best of luck with your summer, and please get in touch if I can help at all!
FernetMenta Offline
Team-Kodi Developer
Posts: 6,190
Joined: Jul 2010
Reputation: 140
Location: Munich
Post: #18
Kodi's presentation component is independent of decoding APIs like VAAPI or VDPAU. Kodi renders GL textures. The requirement on a windowing system is that Kodi needs to know when a texture rendered through GL becomes visible on screen.
Kodi also needs to sync its clock with vertical retrace, aka vblank. The windowing system should provide some mechanism to do so.
daniels Offline
Junior Member
Posts: 3
Joined: May 2017
Reputation: 0
Post: #19
(2017-05-06 16:13)FernetMenta Wrote:  Kodi's presentation component is independent of decoding APIs like VAAPI or VDPAU. Kodi renders GL textures. The requirement on a windowing system is that Kodi needs to know when a texture rendered through GL becomes visible on screen.

That's reasonable. Bearing in mind that this isn't the bad old days of destructive X11 compositors, though, I would be interested to explore direct buffer attachment with you guys. In other words, not using the GPU (via GL) to copy and convert the buffers, but offloading that to the compositor, where we can potentially use the display controller overlays to do it with no copying at all. This usually gives quite good filtering/scaling/conversion through the overlays, as well as helping bring down power usage, and maybe even bringing up performance thanks to reduced memory bandwidth. Our colour management isn't quite up to scratch for perfectly lossless pipelines yet, but we're hoping to make it so in the fairly near future. Having at least a pathway to an implementation in Kodi would be a great testbed for this. Anyway, one for the future.

(2017-05-06 16:13)FernetMenta Wrote:  Kodi also needs to sync its clock with vertical retrace aka vblank. The windowing system should provide some mechanism to do so.

Hm, vblank itself is not quite what you want to be synchronising to. The kernel needs to submit its final configuration to the hardware just before vblank. In turn, the compositor needs to submit its final display configuration to the kernel, before the kernel reaches that critical timing point. In turn, the client needs to submit its final frame to the compositor, a bit before the compositor submits to the kernel. The exact mechanics of the frame timings are a little fluid, but the point is that 'vblank' is a very specific term, where what you want is more like the critical presentation commit point, wherever that may lie. (For Weston's default configuration, this is 7ms before vblank.)
FernetMenta Offline
Team-Kodi Developer
Posts: 6,190
Joined: Jul 2010
Reputation: 140
Location: Munich
Post: #20
1) presentation
For this thread we stay with GL rendering and our own color management. We have things like 3dlut and I think doing this at the app level leaves us with more options.

Rendering video in parallel to GL may be an option as an additional path. As you said, something for the future.

2) vblank
We don't use this for precise timing but to sync the clock. We allow a variance of 30% and also have some error correction if we miss a vblank. The point here is that the display fps never matches the video frame rate 100%, and if you sync to the system clock, sooner or later the error gets bigger than the frame time. Dropping a frame or duplicating one causes noticeable stutter.
(This post was last modified: 2017-05-06 18:14 by FernetMenta.)
daniels Offline
Junior Member
Posts: 3
Joined: May 2017
Reputation: 0
Post: #21
(2017-05-06 18:13)FernetMenta Wrote:  1) presentation
For this thread we stay with GL rendering and our own color management. We have things like 3dlut and I think doing this at the app level leaves us with more options.

Rendering video in parallel to GL may be an option as an additional path. As you said, something for the future.
Totally understand. :-)

(2017-05-06 18:13)FernetMenta Wrote:  2) vblank
We don't use this for precise timing but to sync the clock. We allow a variance of 30% and also have some error correction if we miss a vblank. The point here is that the display fps never matches the video frame rate 100%, and if you sync to the system clock, sooner or later the error gets bigger than the frame time. Dropping a frame or duplicating one causes noticeable stutter.
I see what you mean, but I'm not sure I agree with the sentiment. Like I said, the deadline by which you need to present for a given frame will always fall at a certain point in CPU time. The compositor decides when it will repaint for the next frame, and it is this deadline the app must meet. Using vblank is one way to get this information, but not the only way, and not necessarily entirely correct. In that sense I feel it is useful to avoid talking about vblank (in the very specific CRT-derived timing sense) unless you truly mean the actual display device; vblank is primarily useful for obtaining time-to-light information for A/V sync, rather than for deriving the deadline you must meet to submit your next frame.
FernetMenta Offline
Team-Kodi Developer
Posts: 6,190
Joined: Jul 2010
Reputation: 140
Location: Munich
Post: #22
Not sure we are talking about the same thing. We use vblank information not for A/V sync but for adjusting playback speed to the display. Kodi does not need to know whether the refresh rate is exactly 23.976 or 24.0 or something slightly off. If you play 23.976 material on a 24Hz display it will run slightly faster, but smoothly. I am not talking about presentation time. Kodi needs to get the beat of the vertical retrace.
yolk Offline
Junior Member
Posts: 11
Joined: Mar 2017
Reputation: 0
Post: #23
(2017-05-06 14:34)daniels Wrote:  Congratulations on being accepted this year! I'm one of the Wayland developers, and would be more than happy to help and answer any questions you have, either here or as 'daniels' on Freenode. [...]

Thanks! Having a Wayland developer around to ask directly will certainly come in handy :-) As FernetMenta already said though, support for non-RGB surfaces will probably be very low priority as far as this GSoC project goes.

Allow me to ask one quick question: Are you aware of any "good" C++ bindings for Wayland?
yolk Offline
Junior Member
Posts: 11
Joined: Mar 2017
Reputation: 0
Post: #24
Hi community,

I've set up a GitHub repository for my GSoC project: https://github.com/pkerling/xbmc

I've added the tasks from the proposal as issues there so you can track the progress. Do feel free to comment there, test the code, report bugs, etc. as soon as there is something to test :-)
yolk Offline
Junior Member
Posts: 11
Joined: Mar 2017
Reputation: 0
Post: #25
Hi again,

basic Wayland support is now integrated into the master branch at https://github.com/pkerling/xbmc

To summarize what I did over the first two weeks of coding period:
  • Decide on what library to use for communication with the compositor: I tried out libwayland-client, the old Kodi-specific Wayland protocol wrappers (from the prior implementation from 2013 or so) and waylandpp. In the end I chose waylandpp because the object-based protocol of Wayland really shines when used with C++ (which rules out libwayland-client, which is C) and the home-grown C++ wrappers were too much of a burden to write by hand, considering that the list of interfaces and methods that need to be supported grows with each feature. waylandpp seemed abandoned at first, but I put out a few pull requests on GitHub which the author promptly merged. This is a very good sign and means that it probably won't be necessary to maintain a Kodi-specific fork of waylandpp. I do have a fork on GitHub, though, which should be used for compiling Kodi for the time being: I often add new features to waylandpp, use them in Kodi, and then submit a PR to the waylandpp owner afterwards, which means that necessary features might not be upstreamed yet. I expect this to change once things get more stable.
  • Finish porting all features that the prior implementation had - and more :-) Among the additional features are things such as full Unicode input (think e.g. Cyrillic), having a window icon in the task switcher, not displaying a mouse cursor when no mouse input is available, and selecting a monitor for fullscreen display
  • Work on getting VAAPI support back (in-progress)
  • Start xdg_shell integration
  • Do some clean-up in Kodi related to the event system, which was based on SDL and is thoroughly obsolete by now

I regularly update the project board which should give you an idea about what I'm working on at the moment. Next step is cleaning up VAAPI and xdg_shell support.

In the end I did rewrite most of the prior implementation, but I used it a lot for reference. Note that there is nothing inherently wrong with the old code; the basic stuff did work, after all. It was just using a deprecated implementation model (CWinSystemEGL) that was slated for removal anyway because it does not make sense from an architectural POV - EGL is not a windowing system, just a means to get a GL surface. Actually, most other code depending on CWinSystemEGL has already been refactored by now. Additionally, the old implementation included a whole lot of code for integration testing that I decided not to add. It complicates the implementation a lot, is very time-intensive to port and maintain, and in my opinion does not provide enough benefit to make up for that.

The repository also has basic build instructions, and it would be cool if some interested parties would get the code, try to run it on their compositor of choice and report the results.