2017-07-03, 10:55
(This post was last modified: 2017-07-03, 10:57 by meowmoo.)
All good at the end, no 503 errors in the log, and all information is there.
2017-07-08, 09:39
(This post was last modified: 2017-07-08, 09:40 by JanneT.)
Dave,
did you introduce delays in the scraping process?
If so, is there a way to turn them off?
The reason for asking is that I have put together a simple web application with corresponding scrapers for artists and albums. Now this process is an order of magnitude slower than before. It used to be pretty fast compared to MusicBrainz etc. OK, I have to manually update the database, but for some reason, with the music I normally listen to, I have to do some manual work anyway. And MusicBrainz is not that user friendly. At least I have a Web GUI to handle the content with cut-and-paste etc. :-)
- Janne
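(For anyone rolling their own scraper front end like the one described above, a configurable throttle is one way to keep any polite delay between requests adjustable, or switchable off entirely for local testing. This is a hypothetical Python sketch, not Kodi's actual scraper code; `Throttle` and `fetch_all` are illustrative names.)

```python
import time

class Throttle:
    """Enforce a minimum interval between successive requests.

    A delay of 0 disables throttling entirely.
    """
    def __init__(self, delay=1.0):
        self.delay = delay
        self._last = None

    def wait(self):
        now = time.monotonic()
        if self._last is not None and self.delay > 0:
            remaining = self.delay - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()

def fetch_all(urls, fetch, delay=1.0):
    """Fetch each URL via the supplied callable, pausing between requests."""
    throttle = Throttle(delay)
    results = []
    for url in urls:
        throttle.wait()
        results.append(fetch(url))
    return results
```

With `delay=0` the loop runs at full speed, which is fine against a local database but risks bans against public services such as MusicBrainz, which expect roughly one request per second.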
There's a difference between scraping and scanning.
Also, it's a one-time job, so it doesn't matter that it takes longer the first time.
As Martijn said, the initial library scrape is a one-time process. How many times are you scraping your library, and why??
Collaboration albums (multiple album artists) are a problem for scraping NFOs and artwork, something I hope to address.
Meanwhile, creating a single import file and importing it could work - play with export to a single file and see what you get. Remember that for music, export/import only contains the scraped album and artist data, not the things derived from tags.