[GSoC] GPU hardware assisted H.264 decoding via OpenGL GLSL shaders - developers only
#91
george_k Wrote:Saw this on Technical Note TN2267 for OS X 10.6.3 and thought it might be interesting:



http://developer.apple.com/mac/library/t...n2267.html

Too slow; VDADecoder is already in XBMC: http://forum.xbmc.org/showpost.php?p=529...stcount=47
#92
Stip Wrote:Hello!

I'm also interested in Rudd's partial OpenGL implementation, and will likely study his code. I am in a university project as well, and will be looking into implementing h.264 (and hopefully the SVC extension of it) on GPUs using OpenGL ES. I will use ES, as I'd like to target systems on a chip, such as the one found in the BeagleBoard.

Even though an OpenGL ES implementation will likely be slightly different than an OpenGL implementation, the algorithms involved should probably be pretty similar.

Good luck, kasbah, if you intend to attempt continuing Rudd's work!

Hi, interesting!

Where will you develop? For XBMC, or outside as a kind of library?

Just worth noting: XBMC on the BeagleBoard is rather slow, and it seems to be because the GPU is stressed. However, I'll be working on this during the summer, so it might be possible in the end Smile.
#93
Hello!

My work is not really related to XBMC, so I will not (at first, at least) code for XBMC specifically. If I understand correctly, Rudd's work was done within the FFmpeg (libavcodec) code, so I will probably build on this code and make it use OpenGL ES 2.0.

As this is a university project, the emphasis is on providing proofs of concept on how various parts of video decoding in popular codecs can be done on the GPU rather than on making a fully working GPU decoder. Hopefully something useful will still come out of it though :p

I suppose the GPU stress comes from the XBMC UI then, or is something stressful done on the GPU while decoding video already?
#94
Hi Stip,

My work is not strictly related to XBMC at the moment either. I am merely throwing together a proof of concept. Progress is very slow (extremely slow considering my deadlines), but once I get something working I am planning to contribute towards XBMC, even after I graduate.

As far as I understand, OpenGL ES 2.0 is virtually indistinguishable from OpenGL 2.0 except for some subtleties, so we are working on very similar things.

Unfortunately Rudd left the code in a non-compiling, half-working state. As far as I can see he did not complete his project. I have hacked the code a bit and made it compile. Have a look at my github repo: http://github.com/kasbah/gsoc. The readme there explains a bit more. Be aware that this repo is mostly just for me and may not be overly useful to you.

If you compile the ffmpeg (not ffmpeg-orig) on there and use Rudd's player you will find a program that outputs the first non-I luminance frame bound as a texture. There are some macroblocks missing (black blocks). And that's all it does at the moment. I am not sure if it does any motion compensation; I think it just binds the already fully decoded frame. I am looking at this at the moment.
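
If it helps anyone reading along, the texture upload itself is the least mysterious part. A rough sketch of binding a decoded 8-bit luma plane as a 2D texture (my own illustrative code, not Rudd's, and the function name is made up):

Code:
#include <GL/gl.h>
#include <stdint.h>

/* Illustrative only: upload one decoded 8-bit luma plane (width x height,
 * 'linesize' bytes per row) into a 2D luminance texture. */
static GLuint upload_luma_plane(const uint8_t *luma, int width, int height,
                                int linesize)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* libavcodec pads its rows, so tell GL the real row length
     * (in pixels, which equals bytes for an 8-bit plane). */
    glPixelStorei(GL_UNPACK_ROW_LENGTH, linesize);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, luma);

    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    return tex;
}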

I welcome any contributions towards this idea at all. Please do have a look at the code and tell me what you make of it (you can take a look at the changes I made to make it compile, for instance, and tell me if you disagree with them). A fresh look at this could probably clear some things up.

What is the timeline for your project?
#95
Hello Kasbah,

Yes, the differences between OpenGL and OpenGL ES 2.0 shouldn't be that big, I think. Some things, like the 3D texture Rudd seems to use for storing the decoded picture buffer, may have to be changed to multiple 2D textures and so on, but the algorithms should be the same anyway.
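
Roughly what I have in mind for the ES side, just to make it concrete (my own sketch; the array and function names are made up):

Code:
#include <GLES2/gl2.h>

#define MAX_REF_FRAMES 16   /* the H.264 DPB holds at most 16 frames */

/* Sketch: ES 2.0 has no 3D textures, so keep the decoded picture buffer
 * as one 2D luminance texture per reference frame and select the right
 * texture by reference index on the CPU side before each draw. */
static GLuint dpb_tex2d[MAX_REF_FRAMES];

static void create_dpb_textures(int width, int height)
{
    int i;
    glGenTextures(MAX_REF_FRAMES, dpb_tex2d);
    for (i = 0; i < MAX_REF_FRAMES; i++) {
        glBindTexture(GL_TEXTURE_2D, dpb_tex2d[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
    }
}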

I did stumble upon your Git repository a while ago, and compiled the code using that. I don't have anything to complain about there. It compiles Big Grin

I was under the impression that it does do motion compensation on the GPU. I haven't looked at h264.c (in libavcodec) in much detail, but it seems like the implementation first decodes the first I-frame on the CPU, and transfers that frame to the 3D texture dpb_tex. This texture will then be used as a reference frame when doing motion comp. on the GPU.
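
In other words, the CPU-decoded frame ends up in one slice of that 3D texture. A sketch of how I read that step (my own code, not Rudd's):

Code:
#include <GL/glew.h>   /* Rudd's code pulls in GLEW anyway */
#include <stdint.h>

/* Sketch: copy a CPU-decoded 8-bit luma plane into slice 'ref_idx' of a
 * 3D texture acting as the decoded picture buffer (one slice per frame). */
static void upload_reference_frame(GLuint dpb_tex, int ref_idx,
                                   const uint8_t *luma,
                                   int width, int height, int linesize)
{
    glBindTexture(GL_TEXTURE_3D, dpb_tex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, linesize);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    /* Write one width x height x 1 slab at depth 'ref_idx'. */
    glTexSubImage3D(GL_TEXTURE_3D, 0,
                    0, 0, ref_idx,      /* x, y, z offsets   */
                    width, height, 1,   /* region dimensions */
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, luma);

    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}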

It seems to perform the actual motion compensation in render_mbs (which calls render_one_block appropriately for the different block types), render_one_block (which puts motion vectors and block coordinates into the vertices and texture coordinates), and draw_mbs (which actually executes things on the GPU).
draw_mbs() seems to use six different shaders to do motion compensation in six passes. The remaining black blocks seem to be intra-coded blocks, which are not handled by the GPU in that code.
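
My (certainly simplified) mental model of what one of those passes boils down to is a fragment shader along these lines; this is my own reconstruction, definitely not Rudd's actual shader:

Code:
/* Sketch: the quads emitted by render_one_block carry a texture coordinate
 * that already points at (block position + motion vector) in the reference
 * picture, so the prediction pass is essentially a single texture fetch.
 * The residual would have to be added in a separate pass. */
static const char *mc_frag_src =
    "uniform sampler3D dpb;                                       \n"
    "varying vec3 ref_coord;   /* x,y = position + MV, z = ref */ \n"
    "void main(void)                                              \n"
    "{                                                            \n"
    "    gl_FragColor = texture3D(dpb, ref_coord);                \n"
    "}                                                            \n";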

glutMainLoop in gpu_motion() never returns which is the reason for it not doing anything after the second frame.
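
If one wanted to keep GLUT for now, freeglut (but not classic GLUT) has glutMainLoopEvent(), so the decode loop could stay in control and pump events once per frame. A sketch, with the decoder and render calls standing in as hypothetical names:

Code:
#include <GL/freeglut.h>

/* Hypothetical hooks standing in for the real decoder and render pass. */
extern int  decode_next_frame(void);
extern void render_current_frame(void);

/* Sketch: rather than handing control to glutMainLoop() forever, process
 * pending GLUT events once per decoded frame (glutMainLoopEvent() exists
 * only in freeglut, not in classic GLUT). */
static void decode_loop(void)
{
    while (decode_next_frame()) {
        render_current_frame();
        glutSwapBuffers();
        glutMainLoopEvent();   /* handle window/input events, then continue */
    }
}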

Hopefully we can learn something from each other during our projects. I don't have a real deadline yet, but am supposed to come up with some kind of benchmarks and demo before July. I do not expect to have a full h.264 GPU implementation by that time though Wink

Right now I'm focusing on porting the code to OpenGL ES 2.0 and SDL, since GLUT and GLEW do not seem to support ES.
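
Getting an ES 2.0 context up at all is done through EGL; something like this minimal offscreen sketch (my own test code, no error checking, sizes picked arbitrarily) is enough to start running shaders:

Code:
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <stdio.h>

/* Sketch: bring up an offscreen OpenGL ES 2.0 context through EGL using a
 * pbuffer surface, so no window system is needed for decoder experiments. */
int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    static const EGLint cfg_attr[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint num_cfg;
    eglChooseConfig(dpy, cfg_attr, &cfg, 1, &num_cfg);

    static const EGLint surf_attr[] = { EGL_WIDTH, 1280, EGL_HEIGHT, 720, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, surf_attr);

    static const EGLint ctx_attr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attr);

    eglMakeCurrent(dpy, surf, surf, ctx);
    printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));

    eglTerminate(dpy);
    return 0;
}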
#96
Stip Wrote:I did stumble upon your Git repository a while ago, and compiled the code using that. I don't have anything to complain about there. It compiles Big Grin
Glad it was of help. Maybe you could fork the project on github and we could keep track of each other's work more easily.

Stip Wrote:I was under the impression that it does do motion compensation on the GPU. I haven't looked at h264.c (in libavcodec) in much detail, but it seems like the implementation first decodes the first I-frame on the CPU, and transfers that frame to the 3D texture dpb_tex. This texture will then be used as a reference frame when doing motion comp. on the GPU.

It seems to perform the actual motion compensation in render_mbs (which calls render_one_block appropriately for the different block types), render_one_block (which puts motion vectors and block coordinates into the vertices and texture coordinates), and draw_mbs (which actually executes things on the GPU).
draw_mbs() seems to use six different shaders to do motion compensation in six passes. The remaining black blocks seem to be intra-coded blocks, which are not handled by the GPU in that code.

glutMainLoop in gpu_motion() never returns which is the reason for it not doing anything after the second frame.
Thanks for this clarification. That does make more sense. gpu_dpb (EDIT: not dpb_tex as I had written previously) was not in the definition of "Picture" so I added it to make it compile. Just seemed like a strange state to leave your code in. But I guess we don't know what happened to him because he is completely MIA. I have contacted Dark, who was the mentor on this project, and asked him to have a look as well. But he is quite busy at the moment.
Stip Wrote:Hopefully we can learn something from each other during our projects. I don't have a real deadline yet, but am supposed to come up with some kind of benchmarks and demo before July. I do not expect to have a full h.264 GPU implementation by that time though
Great, looks like we have helped each other a little already. Is your project for the year though? For 3 years? 3 months? Mine needs to be demonstrated in one month No.
#97
kasbah Wrote:Glad it was of help. Maybe you could fork the project on github and we could keep track of each other's work more easily.
Maybe later. I'm porting it to ES 2.0 and SDL at the moment, so the code is a bit messy Wink Seems like the port wasn't that straightforward.

kasbah Wrote:Thanks for this clarification. That does make more sense. dpb_tex was not in the definition of "Picture" so I added it to make it compile. Just seemed like a strange state to leave your code in.
Yes, a bit strange. Maybe he got a better job offer from Nvidia or ATI Tongue

kasbah Wrote:Great, looks like we have helped each other a little already. Is your project for the year though? For 3 years? 3 months? Mine needs to be demonstrated in one month No.
One month? I don't envy you!
This GPU decoding stuff is only part of the project I'm in, so it's a bit unclear how much time I will spend on this part. My part in the project is really about transcoding video. I think the university project itself will go on for years.