My Proposal (Open for Discussion, Not Final)
  • Name: Ahmed ElShereef
  • Summary:
          Security matters! We should fuzz before the bad guys do!
          We are going to use dynamic security testing, namely fuzzing.
          Software issues mostly result from badly parsed input/output data and files. We should catch the existing bugs and vulnerabilities in the libraries and in our dependency code
          by creating a coverage-guided fuzzer capable of exploring Kodi's code and files, injecting it with random inputs/data to find bugs
          that make Kodi crash or fail.
  • How will I achieve this:
    • The final proposal will contain the detailed project timeline.
        To start the fuzzing process, we should:
  1. Set up our fuzzer with all of its components.
  2. Select the target file.
  3. Create our input set of test cases (the corpus).
  4. Check the code coverage to decide whether we should continue or stop fuzzing.
  5. Catch the bugs.
  6. Document the results in a folder/PDF.
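The steps above can be sketched as a minimal AFL++ session. Everything here is illustrative: `./parse_target` is a hypothetical instrumented binary, and the seed content and directory names are assumptions; the `afl-*` invocations are shown as comments since they need an instrumented build to run.

```shell
# Sketch of the fuzzing loop with AFL++ (assumes afl++ is installed;
# ./parse_target is a hypothetical instrumented target binary).
mkdir -p corpus findings                             # working directories
printf '<settings></settings>' > corpus/seed1.xml    # step 3: a seed test case
# Steps 1-2: build the target with AFL++ instrumentation, e.g.
#   CC=afl-clang-fast CXX=afl-clang-fast++ ./configure && make
# Steps 4-5: run the fuzzer; @@ is replaced by each mutated input file:
#   afl-fuzz -i corpus -o findings -- ./parse_target @@
# Step 6: crashing inputs accumulate under findings/default/crashes/
```

The `findings/default/crashes/` directory is where AFL++ stores inputs that triggered a crash, which feeds directly into step 6 (documenting the results).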
  • What will the project focus on:
                 It will focus on applying dynamic, automated fuzz testing to our codebase in order to find software bugs and vulnerabilities.
                 The most important parts are:
                 Fuzzer setup
                 Coverage rate
  • Benefits:
           For users, it should improve Kodi's stability and performance, and therefore the UX.
           For developers, it saves a lot of money and time, if we fuzz before being fuzzed!
  • Goals:
            Discover vulnerabilities in Kodi, document them, and pass them to a passionate developer.

            Maybe in a future GSoC we could also try reverse-engineering protection as well as exploit development for more secure software.
  • What does it touch in Kodi:
         This section is vital and has to be discussed. As Kodi is GUI based, the libraries/tools it includes should be responsible for doing
          their own fuzz testing internally (e.g. via OSS-Fuzz), and if they are not fuzzing on their own, we will need to set up fuzzing for those libraries.

           If they already have the OSS-Fuzz service/technique built into their codebase, we shouldn't fuzz them again.

          E.g., as you mentioned, FFmpeg, which handles video, audio, and other multimedia files and streams, handles its own fuzzing.
  • Case 1: "All the libraries are doing their own fuzzing"
        We can fuzz Kodi's configuration data: when Kodi loads/parses its config files, that would likely be a suitable fuzzing target, unless the parsing is just done by an external library
         that is already fuzzed.
  • Case 2: "Fuzzing the libraries that aren't already fuzzed"
       How can we figure that out? By checking the OSS-Fuzz projects folder and its list. If a library is not there, we also have to check the repositories of those
        libraries for files containing "LLVMFuzzerTestOneInput" (the OSS-Fuzz/libFuzzer harness entry point).
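The repository check described above can be scripted as a simple grep for the standard entry point. This is a sketch: `demo-lib` and its harness file are created here only to make the example self-contained; in practice the grep would run against a real library checkout.

```shell
# Search a library's source tree for the standard fuzz entry point.
# demo-lib stands in for a real library checkout (an assumption).
mkdir -p demo-lib/fuzz
printf 'extern "C" int LLVMFuzzerTestOneInput(const uint8_t*, size_t);' \
  > demo-lib/fuzz/target.cc
if grep -rq "LLVMFuzzerTestOneInput" demo-lib; then
  echo "library already has a fuzz harness"
else
  echo "library needs a new harness"
fi
```

OSS-Fuzz integrations and standalone libFuzzer harnesses both define `LLVMFuzzerTestOneInput`, so this one search covers both cases.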
  • Requirements: 'I have found that the American Fuzzy Lop fuzzer in the project description is outdated, as its last update was in 2017; it has been replaced by a more powerful, enhanced fuzzer, AFL++.'
  1. AFL++ (American Fuzzy Lop Plus Plus) Fuzzer
  2. Fuzz-testing basics
  3. Fuzzer-building basics
  4. Linux operating system (not mandatory, but it would enhance our results and testing process, as it is a suitable environment for the fuzzer)
  5. Understanding a new codebase
  6. Auditing C/C++ code
  7. Basic Linux, shell, and command-line knowledge
  8. Version control
       Note: Walking through the project, we may be forced to use/test another fuzzer; I think it would be LibAFL or LibFuzzer.
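If we did switch, a LibFuzzer harness reuses the same `LLVMFuzzerTestOneInput` entry point; only the build step changes. A hedged sketch, assuming clang with fuzzer support and a hypothetical `harness.cpp`:

```shell
mkdir -p corpus    # LibFuzzer evolves this corpus directory in place
# Build the harness as an in-process fuzzer (commands shown as comments,
# since they require clang and a real harness.cpp):
#   clang++ -g -fsanitize=fuzzer,address harness.cpp -o harness
#   ./harness corpus/
```

Because AFL++ can also drive harnesses written in this style, trying LibFuzzer later would not mean rewriting the targets.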
  • Possible mentors: @Razze 
  • Workload: To be discussed.
                What I have discovered is that fuzzing takes a lot of time, and the most important question is how to decide whether to stop or continue fuzzing.
                It depends on what coverage level we aim for: there may still be bugs, but they are not easy to reach. Maybe we could run parallel fuzzers to test that.
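Parallel fuzzing is built into AFL++: one main instance and any number of secondaries share an output directory and sync their queues. A sketch, with `./parse_target` again a hypothetical target and the `afl-fuzz` commands shown as comments:

```shell
mkdir -p findings    # shared output directory for all instances
# One main (-M) instance plus secondaries (-S) cooperate on the same queue:
#   afl-fuzz -i corpus -o findings -M main -- ./parse_target @@
#   afl-fuzz -i corpus -o findings -S sec1 -- ./parse_target @@
#   afl-whatsup findings    # combined stats across all running instances
```

`afl-whatsup` summarizes executions, paths, and crashes across instances, which is one way to decide when continued fuzzing stops paying off.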
I wonder if you would like the final proposal to follow the same outline or not?
Outline in general seems fine.
Quote: It depends on what coverage level we aim for: there may still be bugs, but they are not easy to reach. Maybe we could run parallel fuzzers to test that.

While you will need to run your stuff to test it, the important work you should be doing is getting us the tools, not executing them. That should be out of scope, but we can surely try to document some high-level failures, if we find them.
