I created a ~2MiB file.
dd if=/dev/urandom of=file.bin bs=2M count=1
Then I copied that file a large number of times and generated a checksum for each (identical) copy.
for i in $(seq 50000); do
    name="file.${i}.bin"
    cp file.bin "${name}"
    sha512sum "${name}" > "${name}.sha512"
done
I then verified all of those checksummed files with a validation script that runs sha512sum against each file.
for file in $(find . -regex ".*\.sha512"); do
    sha512sum --check --quiet "${file}" || (
        cat "${file}" && sha512sum "${file%.sha512}"
    )
done
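For reference, a roughly equivalent check without the shell loop, sketched assuming GNU findutils; it hands the checksum files to sha512sum in batches and, with --quiet, only prints mismatches and warnings:

find . -name "*.sha512" -exec sha512sum --check --quiet {} +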
I just created these files, yet when I validate them moments later I see intermittent failures and inconsistencies in the data (console text truncated for readability):
will:/mnt/usb $ for file in `find ...
file.5602.bin: FAILED
sha512sum: WARNING: 1 computed checksum did NOT match
91fc201a3812e93ef3d4890 ... file.5602.bin
b176e8e3ea63a223130f3a0 ... ./file.5602.bin
The checksum files are all identical, since the source files are all identical.
The problem is that my computer, seemingly at random, generates the wrong checksum for some of my files when I go to validate them. A different file fails the checksum every time, and files that previously failed will pass.
will:/mnt/usb $ for file in `find ...
sha512sum: WARNING: 1 computed checksum did NOT match
91fc201a3812e93ef3d4890 ... file.3248.bin
442a1d8805ed134c9ab5252 ... ./file.3248.bin
Keep in mind that all of these files are identical.
I see the same behavior with SATA SSDs, HDDs, and USB devices; with md5 and sha512; and with xfs, btrfs, ext4, and vfat. I tried live booting into another OS and saw the same strange behavior regardless. I also see that rsync --checksum thinks the checksums for these files are wrong and re-copies them even though they have not changed.
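To see that without actually copying anything, an invocation along these lines works (the paths here are placeholders); --dry-run plus --itemize-changes lists each file the --checksum pass considers changed:

rsync --archive --checksum --dry-run --itemize-changes /mnt/usb/ /mnt/backup/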
What could explain this behavior? Since it's happening on multiple devices with all the scenarios I described, I doubt this is bit rot. My kernel logs show no obvious errors. I would assume this is a hardware issue based on my troubleshooting, but how can this be diagnosed? Is it the CPU, the motherboard, the RAM?
What could explain this behavior? How can this be diagnosed?
From what I've read, a number of issues could explain this behavior: bad disk(s), a bad PSU (power supply), bad RAM, or filesystem issues.
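For the disk hypothesis, SMART data is a quick first check; a sketch assuming smartmontools is installed and that the drive in question is /dev/sda (adjust the device name):

sudo smartctl --all /dev/sda
sudo smartctl --test=short /dev/sda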
I tried the following to determine what was happening. I repeated the experiment with different...
Nothing seemed to resolve this.
Finally, I gave memtest86+ v5.0.1 a try via an Ubuntu live USB.
Voilà. It found bad memory. Through process of elimination I determined that one of my memory sticks was bad, then tested the other overnight to ensure it was in good shape. I re-ran my experiment and now see consistent checksums on all my files.
What a subtle bug. I only noticed this bad behavior by accident. If I hadn't been messing around with file checksums, I do not think I would have found this bad RAM.
This makes me want to schedule a routine in which I regularly verify and test my RAM. A consequence of the bad memory stick is that some of my test data did end up corrupt, but more often than not the checksum verifications were just intermittent failures.
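A rough sketch of what such a routine could run from the live system, assuming the userspace memtester tool is installed (it can only lock and test memory that is not already in use, so booting into memtest86+ periodically remains the more thorough option):

sudo memtester 1024M 3

That locks 1GiB of RAM and runs memtester's test patterns over it for 3 passes; the size and pass count are just example values.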
In one sample data pool, nearly all of the checksums start with cb2848ca0e1ff27202a309408ec76..., because all ~50,000 files are identical. Two files, however, are corrupt, but this is not bit rot or file integrity damage.
What seems most likely is that these two files were written corrupted in the first place, because cp passed their data through the bad RAM when I created them. Those files consistently return checksums of 58fe24f0e00229e8399dc6668b9... and bd85b51065ce5ec31ad7ebf3..., while the other 49,998 files all return the same checksum.
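A quick way to spot outliers like these, sketched here assuming GNU coreutils and findutils: a SHA-512 hex digest is 128 characters, so uniq -w 128 can group the output by digest.

find . -name "file.*.bin" -exec sha512sum {} + | sort | uniq --count --check-chars=128 | sort --numeric-sort

Each output line is a count plus one representative digest/filename pair, so the corrupt files show up as groups with a count of 1.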
This has been a fun but extremely frustrating experiment in debugging.