(2014-01-21, 19:01)Ned Scott Wrote: (2014-01-21, 18:41)jacintech.fire Wrote: @Kib
I used a central MySQL database to sync the library across clients, then share the WHOLE user data directory...
Been doing that for 3.5 years...
Some people have never put on seat belts and haven't been in car accidents for 3.5 years either, but that doesn't make it a good idea.
He doesn't understand... He's at best like a handyman who you call over for anything that's broken in your house... He has "always used duct tape" (i.e. partitioned), so he uses that to fix everything. He has always used caulk and doesn't know there are different types for different purposes (e.g. he's used "MYSQL" [his emphasis, to point out that you didn't read properly] without understanding that MySQL holds only a portion of the data, and the rest [e.g. textures] is actually STILL stored in SQLite databases within %userdata%)... This low-level handyman thinks he's a Master of All Trades (a "hacker," as he's put it) in his field.
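To the point above: the claim that the texture databases under %userdata% are plain SQLite files is easy to check by reading the file's 16-byte magic header. A minimal sketch (the `Textures13.db` name is illustrative; the actual filename varies by XBMC/Kodi version and platform, so the demo below uses a throwaway database standing in for it):

```python
# Sketch: confirm a Kodi/XBMC userdata database is a plain SQLite 3 file
# by checking its magic header. Path/filename below are illustrative only.
import os
import sqlite3
import tempfile

SQLITE_MAGIC = b"SQLite format 3\x00"  # fixed 16-byte header of every SQLite 3 db

def is_sqlite_db(path):
    """Return True if the file starts with the SQLite 3 magic header."""
    with open(path, "rb") as f:
        return f.read(16) == SQLITE_MAGIC

# Throwaway database standing in for e.g. userdata/Database/Textures13.db
demo = os.path.join(tempfile.mkdtemp(), "Textures13.db")
conn = sqlite3.connect(demo)
conn.execute("CREATE TABLE texture (id INTEGER PRIMARY KEY, url TEXT)")
conn.commit()
conn.close()

print(is_sqlite_db(demo))  # True
```

In other words, sharing the whole userdata directory means sharing live SQLite files, which SQLite itself warns against doing over network shares.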
jacintech.fire Wrote: a) 512 1TB: No reason whatsoever. I have always partitioned my drives, figuring that if something happens (losing the index, for example), it would be restricted to 1TB of data and not the whole drive.
b) Duplicate Titles: Streaming a 10GB movie over wifi (or remotely) is a crapshoot. Sometimes it works, most times it does not. A 650-750MB .mp4 file, on the other hand, is far more forgiving.
c) RAID: Scaling to a petabyte using RAID is <insert adjective>!!! Knowing that, I decided to read up on alternatives. Distributed filesystems, object storage and the rest seem to offer promise that RAID does not.
d) How cool would it be to try something new...see where it would take me...
So to add to your contradictions / insanity:
a) You're not a hacker / experimenting if you "have always done it this way" (see "a" above, which you have also repeated multiple times elsewhere)
b) Your setup is broken. You have a Lamborghini which you have parted out to make a Pinto, and you are trying to drive that Pinto on the Nürburgring... IF you had a "thoughtful" setup, you could have had your cake and eaten it too... You could have ripped the discs losslessly into any container (MP4, MKV, whatever), and, if you really wanted, stored a second lower-quality copy for lower bandwidth consumption. Realistically, your bigger problem is that you're trying to shoehorn everything into an XBMC world that currently isn't built for that. If you had run a secondary server on the side (e.g. AirVideo if you have an iOS-only environment, or PS3 Media Server in a mixed environment that supports UPnP streaming), you could even have had that original high-quality single copy transcoded in real time out to all your clients... NEITHER of these is a bad idea... NOT storing the original at a very high quality setting when you claim to have 512 TB of storage is insanity, because I promise you that in 5 years you'll be re-ripping ALL of your media at a higher quality setting. Fun "tinkering" there...
c) You don't seem to know why you say "scaling to a petabyte using RAID is..."... It's probably just something you've read somewhere and are regurgitating. Furthermore, you probably skipped the reason WHY they said it. In reality, the main reason someone would say that is that RAID schemes are based on parity, so if more than X drives fail simultaneously, data is lost. You CAN'T, however, just make a blanket statement. Why? Because the number of disks that can fail before you're toast depends on the RAID level being used. Also because that number depends on how many disks you have and on the size of each disk. Also because the H/W used to run the RAID plays a major role in this (read: every part, from the RAID controller to the disks themselves).
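To make the "it depends on the RAID level" point concrete, here's a rough sketch of worst-case simultaneous failure tolerance for the common levels. These are the textbook numbers only; real-world survivability also hinges on rebuild time, controller quality, and URE rates, which this ignores:

```python
# Sketch: worst-case simultaneous disk failures each common RAID level
# survives. Illustrative only -- real tolerance also depends on hardware,
# rebuild windows, and unrecoverable read errors during rebuild.
def failures_tolerated(level, disks):
    """Return the guaranteed number of simultaneous failures survivable."""
    if level == 0:
        return 0              # striping only: any single failure loses data
    if level == 1:
        return disks - 1      # mirroring: survives all but one copy
    if level == 5:
        return 1              # single parity stripe
    if level == 6:
        return 2              # double parity
    raise ValueError(f"unsupported RAID level: {level}")

for level, disks in [(0, 4), (1, 2), (5, 8), (6, 12)]:
    print(f"RAID {level} with {disks} disks tolerates "
          f"{failures_tolerated(level, disks)} failure(s)")
```

So a blanket "RAID doesn't scale" claim glosses over a RAID 0 array (zero tolerance) and a RAID 6 array (two-disk tolerance) behaving completely differently as disk counts grow.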
d) It would be cool... But you're not doing that... You're doing (to paraphrase your point "a"), "what you have always done"...