2018-02-22, 19:21
Hi,
I’m Mehtab Zafar, a CS undergrad from Delhi, India. I’m interested in working on Kodi’s addon checker idea during GSoC.
For the past two weeks I’ve been contributing to the addon-check tool and have a good handle on the problem that we’re trying to solve. I’ve also been researching what my GSoC proposal would look like.
Broadly, the goal of the project is to reduce the workload of addon reviewers by automating as much work as possible and setting a high bar for the code quality of addons.
Kodi’s roadmap is pushing toward Python 3, with Kodi v19 planned to be Python 3 only. Since we already have working Python 3 support (thanks to Arpit’s GSoC project last year), the main focus now is porting all the addon code to Python 3 as well. One way to help with this is to add Python 3 compatibility detection to the addon-check tool, so that addon authors learn about the issues and can fix them gradually. I’ve opened a separate issue on the repo suggesting an approach.
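As a rough illustration of what such a detection pass might look like (this is only a sketch, not the approach proposed in the issue), the checker could try parsing each addon file with Python 3’s own parser and flag files that fail, which catches Python 2-only syntax such as the old print statement:

```python
import ast

def py3_syntax_ok(source: str) -> bool:
    """Return True if the source parses under the running Python 3 interpreter.

    This only catches syntax-level incompatibilities (e.g. `print "x"`),
    not runtime differences like renamed stdlib modules.
    """
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# Python 2 print statement -> rejected by the Python 3 parser
print(py3_syntax_ok('print "hello"\n'))    # False
# Function-call form -> valid in both Python 2 and 3
print(py3_syntax_ok('print("hello")\n'))   # True
```

A real check would of course need more than this (import-level and API-level differences do not show up as syntax errors), but it gives a cheap first signal per file.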
Beyond Python 3 compatibility, we also want addons to be of good quality, which we can enforce through static code analysis.
Code analysis falls into two broad categories, Python-specific checks and general code-quality metrics:
- For Python-specific checks such as PEP 8 style, we could use PyLint, which seems to cover everything from style violations to error detection (and even refactoring suggestions). It is also fully customisable via a config file.
- For generic code-quality metrics, radon (and its enforcement companion xenon) seems to be the tool of choice. It supports the usual metrics such as McCabe’s cyclomatic complexity, along with several others.
As a concrete example, I ran PyLint on the YouTube Kodi addon: it found some minor issues and gave a rating of 7/10, suggesting the code is generally of good quality, while running it on Reddit Viewer gave a rating of -1/10. Interestingly, both addons turned out to have similar average cyclomatic complexity: 2.8 for YouTube and 4 for Reddit.
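For intuition on where complexity numbers like these come from, here is a toy cyclomatic-complexity counter. It is a deliberately simplified version of what radon actually computes (the list of decision nodes is incomplete), just to show the idea of "1 plus the number of branch points":

```python
import ast

# Node types treated as decision points (simplified McCabe; radon's real
# implementation handles more cases, e.g. comprehensions and assertions).
_DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Very rough McCabe count: 1 + number of decision points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _DECISIONS) for node in ast.walk(tree))

SAMPLE = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""

# The if/elif pair contributes two decision points on top of the base of 1.
print(cyclomatic_complexity(SAMPLE))  # 3
```

Straight-line code with no branches scores 1, and every extra branch path adds one, which is why a sprawling function with deep nesting quickly racks up a high number.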
Tools like PyLint are more actionable because, in most cases, there is a direct way to fix the reported error. The same is not true for complexity metrics: bringing them down might mean restructuring the entire codebase, which demands significant effort from the addon developer.
Another thing to keep in mind with these metrics is that many addon developers are hobbyists who may not be familiar with these technical terms (I only learnt some of them in a Software Engineering course). So if we plan to add such metrics, we will also have to put some work into helping developers understand them.
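One lightweight way to bridge that gap might be to pair each metric with a plain-language hint that the checker prints when a threshold is crossed. The mapping below is purely illustrative (metric names, thresholds, and wording are made up for the sketch):

```python
# Hypothetical mapping from a metric name to a plain-language hint; none of
# these names or messages come from the actual addon-check tool.
FRIENDLY_HINTS = {
    "cyclomatic_complexity": (
        "This function has many independent paths through it. "
        "Splitting it into smaller functions usually brings the number down."
    ),
    "maintainability_index": (
        "A low score here usually means the file is long or dense. "
        "Shorter functions and less nesting help."
    ),
}

def explain(metric: str, value: float, threshold: float) -> str:
    """Report a metric, attaching a readable hint when it crosses its threshold."""
    if value <= threshold:
        return f"{metric}: {value} (ok)"
    hint = FRIENDLY_HINTS.get(metric, "")
    return f"{metric}: {value} exceeds {threshold}. {hint}"

print(explain("cyclomatic_complexity", 12, 10))
print(explain("cyclomatic_complexity", 5, 10))
```

That way the raw number is still reported for experienced developers, while hobbyists get a concrete suggestion instead of just a scary term.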
This is my thinking on the project so far. I’m also looking into how other projects that use Python addons handle code review, and I’ll post my findings here. In the meantime, I’d love some feedback on my approach.