At CES today, Microsoft announced full-blown Windows coming to ARM. This is a very Apple-like move for Microsoft, but without the whole "oh, we had this running for 5 years before releasing it." It sounds like we are in for driver incompatibilities a million times worse than the Vista transition. Even worse, given that Windows's biggest selling point is legacy application compatibility, requiring all third-party applications to be recompiled negates the advantage of a legacy-compatible version of Windows. Finally, the lack of strong infrastructure for supporting the transition (universal binaries, system library management) points to the transition being a painful one.
The fundamental idea is that a PUF (physically unclonable function) should produce a value unique to a chip, in a repeatable fashion, with the side effect that modification of the chip will be detectable.
PUFs come in four main types -
1. Optical - These are the oldest form of PUFs. They started with physicists trying to use chips as diffraction gratings: you shine a laser at the silicon vias and record the signature of the light. These require depackaging the chip in question and are mostly impractical.
2. Silicon - Usually implemented as long delay lines, but sensitive to environmental conditions (mainly temperature and injected faults). There is ongoing research into making these more robust (less reliant on environmental factors).
3. Coating - These are currently considered one of the best forms of PUFs. The topmost layer of the chip has some embedded metal flakes, and the bottom layer of the chip has a capacitance sensor. Since the distribution of the metal flakes is random, the capacitance is random and unique to each chip (the resolution of the capacitance sensor is tuned to ensure this). This method has the added advantage that the minute someone tries to attack the chip by depackaging it, the capacitance changes and the chip's data (usually the secret key for an encryption cipher such as AES/DES) can be wiped. The main problem is that it adds a few extra fab steps, which increases the cost; the initial calibration adds further expense on top of that.
4. Intrinsic - These are a current area of research, particularly for FPGAs. As any hardware designer knows, RAM cells are initialized to random values, but most FPGAs have a small piece of logic that resets them all to zero. If we remove that logic, we have a chip containing a whole bunch of random numbers which, thanks to process variation, will usually initialize the same way on every power-up. This technique has been demonstrated for FPGAs and will probably soon be brought over to full-scale chips.
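The intrinsic case above can be sketched in a few lines of Python. This is a toy simulation, not real hardware: the cell biases, noise levels, and chip sizes are all illustrative assumptions, but they show why power-up values are repeatable per chip yet differ between chips.

```python
import random

# Toy model of an SRAM-style intrinsic PUF (all parameters are
# illustrative). Process variation gives each cell a fixed bias
# toward 0 or 1; power-ups mostly reproduce it, with slight noise.

def make_chip(n_cells=64, seed=None):
    """Manufacturing: each cell gets a fixed probability of reading 1."""
    rng = random.Random(seed)
    return [rng.choice([0.05, 0.95]) for _ in range(n_cells)]

def power_up(chip, rng):
    """Read the PUF: each cell settles per its bias, with a little noise."""
    return [1 if rng.random() < p else 0 for p in chip]

rng = random.Random(42)
chip_a = make_chip(seed=1)
chip_b = make_chip(seed=2)

r1 = power_up(chip_a, rng)      # chip A, first power-up
r2 = power_up(chip_a, rng)      # chip A, second power-up
r3 = power_up(chip_b, rng)      # a different chip

same = sum(a == b for a, b in zip(r1, r2))
diff = sum(a == b for a, b in zip(r1, r3))
print(f"chip A vs itself: {same}/64 bits agree")
print(f"chip A vs chip B: {diff}/64 bits agree")  # roughly half
```

The within-chip agreement is high but not perfect, which is exactly why the error-correction machinery described below is needed.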
When the chip is manufactured, the device creator records its original responses to a series of challenges and calls this the response vector r'. When the chip is powered up, it energizes the PUF circuitry and records the output into an internal PUF value register (k). When the chip (usually a passive RFID) needs to be authenticated, the external party sends a challenge (c), which is processed through some encryption mechanism (call it f()) using the saved PUF register value as the key, producing a response (r). (For those keeping track at home, r = f(c, k).) This response is sent back to the external party, which issues n such requests and compares the received response vector to the expected response vector r'; if they match, the chip is authenticated and work continues.
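The challenge-response flow can be sketched as follows. The text only says "some encryption mechanism," so HMAC-SHA256 stands in for f() here; the key sizes and challenge counts are illustrative assumptions.

```python
import hmac
import hashlib
import secrets

# f(c, k): the source's unspecified "encryption mechanism",
# modeled here with HMAC-SHA256.
def f(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Enrollment: manufacturer records expected responses r' to n challenges.
k = secrets.token_bytes(16)                       # chip's PUF-derived key
challenges = [secrets.token_bytes(8) for _ in range(4)]
r_prime = [f(c, k) for c in challenges]           # stored by the verifier

# Authentication: the chip recomputes r = f(c, k) from its PUF register.
responses = [f(c, k) for c in challenges]
authenticated = all(hmac.compare_digest(r, rp)
                    for r, rp in zip(responses, r_prime))
print("genuine chip authenticated:", authenticated)

# A clone without the PUF key produces wrong responses.
clone_ok = all(hmac.compare_digest(f(c, secrets.token_bytes(16)), rp)
               for c, rp in zip(challenges, r_prime))
print("clone authenticated:", clone_ok)  # almost surely False
```

Since the key never leaves the chip and each challenge yields a fresh response, an eavesdropper who records one exchange cannot replay it against a different challenge.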
Of course, like any normal physical phenomenon, there is some variation between any two power-ups, so the key might change slightly. To compensate for this, the key is stored as a codeword of some long-length code. On each subsequent power-up, the new key is decoded, via nearest-neighbor decoding, to a codeword of the same code. Finally, the distance between the new key and the expected key is stored in a special vector, which is reapplied to the key produced at the next power-up.
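As a concrete example of that decoding step, here is a sketch using a 5x repetition code as a stand-in for the "long code" (real schemes use stronger codes such as BCH; the key and flipped bit positions are illustrative). For a repetition code, majority vote is exactly nearest-neighbor decoding.

```python
# Each key bit is stored as five copies (a 5x repetition code).
def encode(bits, n=5):
    return [b for bit in bits for b in [bit] * n]

# Majority vote per block = nearest-neighbor decoding for this code;
# up to 2 flipped bits per 5-bit block are corrected.
def decode(codeword, n=5):
    return [1 if sum(codeword[i:i + n]) > n // 2 else 0
            for i in range(0, len(codeword), n)]

key = [1, 0, 1, 1, 0, 0, 1, 0]     # illustrative PUF-derived key
stored = encode(key)

# Next power-up: the PUF reproduces the codeword with a few bits flipped.
noisy = stored[:]
for i in (3, 17, 30):              # one flip in each of three blocks
    noisy[i] ^= 1

recovered = decode(noisy)
print("recovered == key:", recovered == key)  # True: errors corrected
```

The distance between the noisy reading and the decoded codeword is the "special vector" the text mentions: it can be saved as helper data and applied to the raw reading at the next power-up before decoding.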