Bad blocks are inherent in NAND flash. SLC NAND devices are more reliable (fewer inherent errors) but costly; MLC devices are less reliable (more inherent errors) but cheap and widely available. NAND flash degrades progressively with use until the number of bad blocks is too high to store data reliably: the errors present from manufacturing grow with both reads and writes, and most flash storage devices ultimately become too error-prone to use. The industry might want to dress up these inherent (and steadily growing) errors by calling them a fingerprint, while still searching for techniques to make NAND flash more reliable.
The article fails to provide a mathematical basis for the claim that two NAND flash devices cannot share the same bad blocks, either at manufacture or at some point during use, which would destroy the supposed identity. NAND flash controllers are designed to detect and correct errors using well-known algorithms, and most let the hardware hide errors while still allowing OS device drivers to read the raw medium. The operating system and the NAND flash controller are thus at least two points where any such fingerprint can be compromised; the filesystem adds yet another layer of abstraction. The table of "real" bad blocks and remaps is usually stored on the NAND flash itself, and altering that bad block table is not difficult.
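For the record, the kind of mathematical basis the article would need is not hard to sketch. Here is a toy back-of-envelope calculation (my assumptions, not the article's: a device with B erase blocks and exactly k factory bad blocks chosen uniformly and independently per device) showing what a collision estimate would even look like — the real open question is whether such a model survives controller remapping and bad-block-table edits:

```python
from math import comb

# Toy model (assumption, not from the article): each device has B erase
# blocks, of which exactly k are factory-marked bad, chosen uniformly at
# random and independently per device.
B = 1024   # erase blocks per device (hypothetical small part)
k = 20     # factory-marked bad blocks (hypothetical count)

# Number of distinct bad-block patterns under this model.
n_patterns = comb(B, k)

# Probability that two independent devices share the exact same pattern.
p_collision = 1 / n_patterns

print(f"distinct bad-block patterns: {n_patterns:.3e}")
print(f"pairwise collision probability: {p_collision:.3e}")
```

Under these made-up numbers a collision is astronomically unlikely but not impossible — which is precisely the sort of claim the article asserts without showing, and it says nothing about what happens once the controller or an attacker rewrites the bad block table.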
Hard disks, interestingly, have similar failure rates and comparably complex issues (such as data remanence) that have been studied at length. I wonder why no one has proposed a signature scheme that uses errors on hard drive platters to identify them; computer forensics for hard drives has a much longer track record of study. Marketing FUD can be ignored.
"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs