


Doing it your way, you ended up with a single hash file for each disk, right? Thinking more about it, I guess that is the best way to do it for the intended purpose of dealing with parity fails, since you'd need to check the whole disk. Well right, so doing it piecewise will at least let me maybe run bits at a time at night and during the work day, followed by "time off" while I might be streaming from Plex in the evening. I've been running with scissors for too long IMO by not having checksums. Yeah, I had no intent to do regular checks, just when (not if) I ever have a failed parity check. Well, at this point I probably just need to get my hands dirty so I can ask smarter questions. Thanks for the tips.
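For what it's worth, here's roughly what that per-disk, run-it-at-night approach could look like from the UnRAID command line. Just a sketch - the disk numbers and the /boot/hashes location are made-up examples, and nice/ionice are only there so the hashing yields to Plex reads:

```bash
#!/bin/bash
# Sketch: one hash file per data disk, at low CPU and disk priority.
# Could be run one disk per night from cron, or in one long off-hours pass.
mkdir -p /boot/hashes                  # example spot; the flash drive persists

for d in 1 2 3; do                     # adjust to however many data disks you have
    # cd in first so md5deep -l records paths relative to the disk root
    (cd "/mnt/disk$d" && nice -n 19 ionice -c3 md5deep -r -l .) \
        > "/boot/hashes/disk$d.md5"
done
```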
Yes, it indeed has to read the entire file to create the checksums. With a Gb network, that's nearly as fast as if it was being done natively on the UnRAID box, but of course it also results in your client (e.g. Windows box) being busy for many hours while creating the checksums. You can, of course, still use your Windows box with no problem during these computations.

I guess my concern is that with a saturated network and reads from any given drive, I'll have problems streaming from Plex. But I suppose you're correct that doing it natively from UnRAID vs. through a gigabit network might result in about the same speed. Not to mention that either way the drive being scanned will still be slammed with reads, possibly causing streaming issues. I will also say that my ability to reliably see 100+ MB/s over my network is sketchy; I seem to hover more around 80 even reading from my cache drive (640GB WD Black), which will surely impact overall hashing speed :-(

You can create and verify checksums on the UnRAID box using the Linux utility md5deep. Note there is also a Windows version of md5deep, but this would have the same "issue" you've already noted - the Windows box would be busy for hours creating/verifying the checksums. Note that once you've created your checksums, if you do another "Create checksums" on the same disk, it will prompt you that it's already found checksums - you then click on "Synchronize" and it will only recompute them for folders that have changed. If nothing's changed, it only takes a couple of minutes to do the check. I simply prefer doing everything from Windows - and I really like the very simple interface of the Corz utility.

Well yeah, I too want to use the Corz utility after the initial "slog". When you say I'll be prompted that checksums have been found, are you referring to Corz's utility or md5deep? 'Cause my ideal would be to use the md5deep CLI followed by Corz's Windows utility.
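To make the md5deep side of that concrete: once a hash list exists, md5deep's matching mode can re-check a disk against it. A sketch, assuming the per-disk hash files above (-x is md5deep's "negative matching" - it prints only files whose current hash is NOT in the list):

```bash
# Re-hash disk1 and print only files that no longer match the saved list;
# no output means everything verified clean.
cd /mnt/disk1
md5deep -r -l -x /boot/hashes/disk1.md5 .
```

Whether the Corz utility can then take over files in md5deep's output format is exactly the open question here; if it can't, the fallback is to keep doing the periodic checks with md5deep itself.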
Would it work to create checksums on the original files/directories, then copy the hash files to the server and verify the checksums on the server? It seems to me that should work.

I agree, creating the checksums beforehand would be best. You'd need to change the configuration file with the checksum utility so it puts all the checksums in a single file at the root - otherwise it will put the checksum file for each folder in that folder, which would be a lot of copying for you. I prefer the latter (so I can easily test any specific folder), but for what you want to do it would be much easier to have a single checksum file you could copy, then do the "verify checksums" command on the array.

Garycase: I'm actually posting this based on your comments in this thread, but figured it would be less of a thread-jack if I posted here. Using the checksum Windows program pointed at UnRAID, doesn't the program have to pull the entire file over before it can create the checksum? Given that, won't it take a veeerrrrry long time versus creating the checksums natively on the server the first time it is run? I just have this image of my PC and array running full bore for days, hammering my network trying to transfer all my files. Which of course brings me to my main question: is it possible to create the checksums directly on UnRAID the first time, in such a way that the Windows checksum app will be able to take over ongoing checksum creation and validation?
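In case it helps to picture the "create first, copy, then verify on the server" flow with concrete commands, here is a sketch using md5deep with a single checksum file at the root, analogous to the single-file configuration described above. All paths are hypothetical:

```bash
# On the source machine, BEFORE copying: one hash list for the whole tree,
# with relative paths so the list still lines up after the copy.
cd /path/to/source
md5deep -r -l . > /tmp/all-files.md5   # keep the list outside the tree being hashed

# Copy the files to the server, put all-files.md5 somewhere outside the share,
# then verify on the server:
cd /mnt/user/Media                     # example destination share
md5deep -r -l -x /boot/all-files.md5 . # prints anything that didn't survive the copy
```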

Another way to ensure you have good copies is to first (BEFORE you copy the files) add a checksum to all of the folders; then, when they're on the server, you can verify the checksums.
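And the per-folder flavor of the same idea (the layout preferred above for spot-checking individual folders) might look like this - again just a sketch with made-up paths:

```bash
# On the source box: one .md5 per top-level folder, written alongside the folders.
cd /path/to/source
for dir in */; do
    (cd "$dir" && md5deep -r -l .) > "${dir%/}.md5"
done

# After copying the folders and .md5 files over, spot-check a single folder
# on the server, e.g.:
#   (cd /mnt/user/Media/Movies && md5deep -r -l -x ../Movies.md5 .)
```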
