If it works that way, then it's likely that all the blocks from a file will be contained on the same volume (all of a file will be on the SSD or the HDD, but not both). In that case, normal forensics of the volume would work as expected. It depends on how the file allocation tables are written, but it's possible that the volume might be mountable by a Linux system.
Here's an approach that would be possible:
- All files are contained on the HDD
- Highly accessed files are copied to the SSD
- The file table on the HDD is marked to say "this file is at that block on the SSD - go read it there"
- If a marked file is written to, then both the HDD and the SSD copy are written to
- There'd probably be some coordination magic, such as the versions on the SSD being checksummed against the versions on the HDD at boot, with the HDD ones taking precedence - that way you could fix something offline and it would still work.
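To make the scheme concrete, here's a toy model of those steps in Python. Everything here (the `HybridVolume` class, dicts standing in for the two volumes, SHA-256 as the checksum) is hypothetical illustration, not how any real driver works: the HDD is the authoritative copy, hot files get promoted to the SSD, writes go to both, and a boot-time reconcile drops any SSD copy that no longer matches the HDD.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class HybridVolume:
    """Toy model of the hypothetical scheme above: the HDD holds every
    file, hot files are mirrored to the SSD, and the HDD copy is the
    source of truth."""

    def __init__(self):
        self.hdd = {}  # path -> bytes (authoritative copy)
        self.ssd = {}  # path -> bytes (read cache for hot files)

    def write(self, path: str, data: bytes):
        # Write-through: if the file is cached on the SSD, update both.
        self.hdd[path] = data
        if path in self.ssd:
            self.ssd[path] = data

    def promote(self, path: str):
        # A "highly accessed" file gets copied to the SSD.
        self.ssd[path] = self.hdd[path]

    def read(self, path: str) -> bytes:
        # Serve from the SSD when the file has been promoted there.
        return self.ssd.get(path, self.hdd.get(path))

    def reconcile_on_boot(self):
        # Checksum SSD copies against the HDD; the HDD wins, so an
        # offline edit to the HDD simply invalidates the stale cache entry.
        for path in list(self.ssd):
            if path not in self.hdd or \
               checksum(self.ssd[path]) != checksum(self.hdd[path]):
                del self.ssd[path]
```

The "fix something offline" case falls out naturally: edit the HDD copy directly, and on the next boot the mismatched SSD copy is evicted, so reads fall back to the corrected HDD version.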
If the objective is to keep highly read files on the SSD because of its seek and access time advantage, then this approach would do that without killing your ability to work on the filesystem offline. This assumes we only want to use the SSD for caching files to be read, which is reasonable, as writes are where the SSD's wear issues are.
Frankly, we won't know what the limitations are until we do a forensic examination of the volumes handled by the driver.