Kodi Community Forum
Broken Crunchyroll [DMCA Takedown] - Printable Version

+- Kodi Community Forum (https://forum.kodi.tv)
+-- Forum: Support (https://forum.kodi.tv/forumdisplay.php?fid=33)
+--- Forum: Add-on Support (https://forum.kodi.tv/forumdisplay.php?fid=27)
+---- Forum: Video Add-ons (https://forum.kodi.tv/forumdisplay.php?fid=154)
+---- Thread: Broken Crunchyroll [DMCA Takedown] (/showthread.php?tid=129709)



RE: Crunchyroll Takeout v0.7.2 - le__ - 2013-10-17

v1n1c1us, are you using the pt-BR locale?
I saw an older post of yours mentioning Legendas.TV, so I believe that's the case.

There seems to be a problem with the pt-BR locale where the string gets converted to UTF-8 twice (in crunchy_scraper.py, on line 263 and again on line 273). I reported this to yoshi through PM some time ago, but maybe he missed that message.

Anyway, to fix it:
Open:
Code:
\addons\plugin.video.crunchyroll-takeout\resources\lib\crunchy_scraper.py

Search for (in the current version it's at line 273):
Code:
ex = 'XBMC.Notification("'+notice_msg.encode("utf8")+':","'+login_try_msg.encode("utf8")+'...", 3000)'

Replace with (basically, remove the two .encode("utf8") calls):
Code:
ex = 'XBMC.Notification("'+notice_msg+':","'+login_try_msg+'...", 3000)'

Then the queue works again, even when using the pt-BR locale.


EDIT:
Maybe line 293
Code:
ex = 'XBMC.Notification("'+notice_msg.encode("utf8")+':","'+setup_msg.encode("utf8")+'.", 3000)'

and line 299
Code:
ex = 'XBMC.Notification("'+notice_msg.encode("utf8")+':","'+dl_queue.encode("utf8")+'", 3000)'

should also have the .encode("utf8") removed, for the same reason: the strings were already encoded as UTF-8 earlier (at lines 265 and 267).
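For anyone curious why the double encode breaks the queue: in Python 2, which the add-on runs on, calling .encode("utf8") on a str that already holds UTF-8 bytes makes the interpreter implicitly decode it as ASCII first, which raises UnicodeDecodeError on any accented pt-BR character. A minimal illustrative sketch (not code from the add-on), written for Python 3, where the same mistake surfaces as an AttributeError instead:

```python
msg = "Notificação"            # a pt-BR string with accented characters

once = msg.encode("utf8")      # first encode: fine, yields bytes

try:
    once.encode("utf8")        # second encode: the bug the fix removes
except AttributeError as err:  # Python 3: bytes objects have no .encode()
    print("double encode fails:", err)

# Encoding exactly once round-trips cleanly:
assert once.decode("utf8") == msg
```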


RE: Crunchyroll Takeout v0.7.2 - yoshiofthewire - 2013-10-17

I pushed this in as a change.
I think I missed it or something the last time I did a major overhaul.

I also pulled that change into the 2 other lines below the one in question.

Now if only I could get my .po pulled for translations of the interface.


RE: Crunchyroll Takeout v0.7.2 - v1n1c1uS - 2013-10-18

le__ Thanks for the answer. And yes, I'm using the pt-BR locale.

I will try to make these changes over the weekend and report the results back here.

yoshiofthewire Thanks for the plugin.


RE: Crunchyroll Takeout v0.7.2 - yoshiofthewire - 2013-10-18

v1n1c1uS, just update the plugin, I pushed the changes this afternoon


RE: Crunchyroll Takeout v0.7.2 - v1n1c1uS - 2013-10-18

(2013-10-18, 01:33)yoshiofthewire Wrote: v1n1c1uS, just update the plugin, I pushed the changes this afternoon

Ok, Thank you. Better this way Tongue


RE: Crunchyroll Takeout v0.7.2 - v1n1c1uS - 2013-10-18

I did the update, now it's working.

Thanks.


RE: Crunchyroll Takeout v0.7.2 - Boxiom - 2013-10-23

Just wondering: the main thread says 480p will show as SD. Does that simply mean it says SD while playing the video (I noticed it did), or will the video actually not be in 480p?

Anyway, thanks for a great plugin.


RE: Crunchyroll Takeout v0.7.2 - yoshiofthewire - 2013-10-24

XBMC will say SD when playing 480p


RE: Crunchyroll Takeout v0.7.2 - calib3r - 2013-10-27

Is the only way to get HD playback to have premium?


RE: Crunchyroll Takeout v0.7.2 - yoshiofthewire - 2013-10-28

Yes


RE: Crunchyroll Takeout v0.7.2 - calib3r - 2013-10-29

Do the watched labels not work? Also is there a way to get the information for the shows, like from scrapers?


RE: Crunchyroll Takeout v0.7.2 - le__ - 2013-10-30

Recently Crunchyroll added two new series exclusively to their Portuguese/Spanish regions: Seiken no Blacksmith and Dance in the Vampire Bund.

Since those titles are unavailable in the USA, neither series was showing up in the plugin, so I played around a little and added another option to the settings page: Lineup region.
If it's set to US, everything stays as usual.
If it's set to BR, ES or FR, it will display only the titles that have subs in the corresponding language, including the titles that may be exclusive to that region.

Unlike the US page, where several titles have been translated to English, most of the titles in Brazil use the Japanese names; but for some weird reason they are alphabetized according to the English names on the main listing ("Daiya no A" was coming before "Ai-Mai-Mi", for example), so I had to add a sort to the title listing to put it in real alphabetical order.
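The sort is just a case-insensitive key on the series title; a standalone sketch with illustrative sample titles (the diff sorts the BeautifulSoup results the same way, using each element's text as the key):

```python
# Illustrative titles only, showing the ordering problem and the fix.
titles = ["Daiya no A", "naruto Shippuuden", "Ai-Mai-Mi"]

# Lowercasing the key gives a real alphabetical order regardless of case.
print(sorted(titles, key=lambda t: t.lower()))
# ['Ai-Mai-Mi', 'Daiya no A', 'naruto Shippuuden']
```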

I also noticed that Naruto Shippuuden, Hunter X Hunter, Kuroko, etc. were showing up duplicated in the main list or always appearing at the end of every genre list. It turns out the scraper was also picking up the videos from the "Popular Series" sidebar widget, so I changed attrs={"itemtype":"http://schema.org/TVSeries"} to attrs={"class":"hover-bubble group-item"} and now only the right items show up.

Here are the diffs, feel free to use or modify them. Smile

settings.xml
Code:
--- D:\XBMC_testes\!original\plugin.video.crunchyroll-takeout\resources\settings.xml    Tue Oct 29 14:57:18 2013
+++ D:\XBMC_testes\portable_data\addons\plugin.video.crunchyroll-takeout\resources\settings.xml    Tue Oct 29 15:22:38 2013
@@ -12,4 +12,5 @@
    <setting id="useSubs" label="Disable subtitles" type="bool" default="false"/>
     <setting id="useSRTSubs" label="Use unstyled subtitles" type="bool" default="false"/>
     <setting id="subsLang" type="enum" lvalues="100002|100003|100004|100005|100006" label="100001" default="0" />
+    <setting id="lineupRegion" type="enum" values="US|BR|ES|FR" label="Lineup Region" default="0" />
</settings>

crunchy_scraper.py
Code:
--- D:\XBMC_testes\!original\plugin.video.crunchyroll-takeout\resources\lib\crunchy_scraper.py    Tue Oct 29 14:57:17 2013
+++ D:\XBMC_testes\portable_data\addons\plugin.video.crunchyroll-takeout\resources\lib\crunchy_scraper.py    Tue Oct 29 22:14:19 2013
@@ -17,6 +17,7 @@
from BeautifulSoup import BeautifulSoup

__settings__ = sys.modules[ "__main__" ].__settings__
+lineupRegion = __settings__.getSetting("lineupRegion")

class _Info:
    
@@ -38,10 +39,12 @@
                 item = {}
                 print "Crunchyroll Takeout: --> in parseTitleList"
                 soup_title = BeautifulSoup(feed_title)
-                queue_list = soup_title.findAll('li',attrs={"itemtype":"http://schema.org/TVSeries"})
+                queue_list = soup_title.findAll('li',attrs={"class":"hover-bubble group-item"})
+                # queue_list = soup_title.findAll('li',attrs={"itemtype":"http://schema.org/TVSeries"})
+                queue_lists = sorted(queue_list, key=lambda elem: elem.text.lower())
                 num_series = len(queue_list)
                 print "Crunchyroll Takeout: -->number of found series "+str(num_series)
-                for queue_series in queue_list:
+                for queue_series in queue_lists:
                         if queue_series is not None:
                                 #print queue_series
                                 item['name'] = queue_series.a['title']
@@ -54,7 +57,8 @@
                 print "Crunchyroll Takeout: --> in parseSpBoxScrappedSeries"
                 item = {}
                 soup_title = BeautifulSoup(feed)
-                queue_list = soup_title.findAll('li',attrs={"itemtype":"http://schema.org/TVSeries"})
+                queue_list = soup_title.findAll('li',attrs={"class":"hover-bubble group-item"})
+                # queue_list = soup_title.findAll('li',attrs={"itemtype":"http://schema.org/TVSeries"})
                 num_series = len(queue_list)
                 print "Crunchyroll Takeout: -->number of found series "+str(num_series)
                 for queue_series in queue_list:
@@ -127,6 +131,19 @@
        self.episodes_list = []
        
        
+    def getRegion(self):
+        # print 'CRUNCHYROLL: --> Lineup: '+lineupRegion
+        if lineupRegion == "0":
+            lineupRegions = "en-us"
+        if lineupRegion == "1":
+            lineupRegions = "pt-br"
+        if lineupRegion == "2":
+            lineupRegions = "es-es"
+        if lineupRegion == "3":
+            lineupRegions = "fr-fr"
+        print 'CRUNCHYROLL: --> Using lineup from: '+lineupRegions
+        return lineupRegions
+        
    def getEpisodeListing(self, url):
        full_url = "http://www.crunchyroll.com"+url
        id = url.replace('/','')
@@ -137,7 +154,8 @@
            rssFeed = usock.read()
        else:
            opener = urllib2.build_opener()
-            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
+            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language',self.getRegion()+',en;q=0.5')]
+            # opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
            usock = opener.open(full_url)
            rssFeed = usock.read()
            if usock.headers.get('content-encoding', None) == 'gzip':
@@ -161,7 +179,8 @@
            rssFeed = usock.read()
        else:
            opener = urllib2.build_opener()
-            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
+            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language',self.getRegion()+',en;q=0.5')]
+            # opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
            usock = opener.open(full_url)
            rssFeed = usock.read()
            if usock.headers.get('content-encoding', None) == 'gzip':
@@ -187,7 +206,8 @@
            rssFeed = usock.read()
        else:
            opener = urllib2.build_opener()
-            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
+            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language',self.getRegion()+',en;q=0.5')]
+            # opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
            usock = opener.open(full_url)
            rssFeed = usock.read()
            if usock.headers.get('content-encoding', None) == 'gzip':
@@ -242,7 +262,8 @@
            rssFeed = usock.read()
        else:
            opener = urllib2.build_opener()
-            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
+            opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language',self.getRegion()+',en;q=0.5')]
+            # opener.addheaders = [('User-Agent','curl/7.16.3 (Windows  build 7600; en-US; beta) boxee/0.9.21.12594'),('Accept-Encoding','deflate, gzip'),('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.7'),('Accept-Language','en-us,en;q=0.5')]
            usock = opener.open(full_url)
            rssFeed = usock.read()
            if usock.headers.get('content-encoding', None) == 'gzip':
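One small caveat about getRegion() in the diff above: the chained ifs leave lineupRegions unassigned (a NameError) if the setting ever holds an unexpected value. A dict lookup with a fallback avoids that; a sketch (the names REGION_LANGS and get_region are mine, not the add-on's):

```python
# Maps the settings enum index (stored as a string) to an Accept-Language
# value, mirroring what getRegion() in the diff above does.
REGION_LANGS = {"0": "en-us", "1": "pt-br", "2": "es-es", "3": "fr-fr"}

def get_region(lineup_region):
    # Fall back to en-us instead of crashing on an unexpected value.
    return REGION_LANGS.get(lineup_region, "en-us")

print(get_region("1"))  # pt-br
print(get_region("9"))  # en-us
```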



RE: Crunchyroll Takeout v0.7.2 - yoshiofthewire - 2013-10-31

Updated


RE: Crunchyroll Takeout v0.7.2.2 2013.10.30 - yoshiofthewire - 2013-10-31

(2013-10-29, 21:39)calib3r Wrote: Do the watched labels not work? Also is there a way to get the information for the shows, like from scrapers?

Unfortunately, no and no.

It may be possible to display more show data, but at the cost of either more fragile code or significantly longer loading times.


RE: Crunchyroll Takeout v0.7.2.2 2013.10.30 - yoshiofthewire - 2013-11-21

Pre-warning: Crunchyroll just updated their Android app and are now using DRM.