The fact that no one has found neurons to be dependent upon quantum effects doesn't prove anything. Observing such an effect would probably be extremely hard. Besides, AFAIK the behaviour of even individual neurons is quite complex and not always predictable - a neuron is not just a "transistor". You could also argue that a quantum effect tied to consciousness would not necessarily show up when looking at one neuron in isolation: it could be that the effect only "cares" to be there when there's a fully functioning neurological system.
Personally, I believe that the processes of the human brain, as understood by classical physics, do produce part of human intelligence and behaviour (i.e. I don't believe the brain is just a mediator/amplifier of some quantum action). Also, there are known processes that are not under conscious control at all, and others that are only partly so (e.g. breathing). So it might be some very complex interaction between the "thing" that provides us with the "inner experience of consciousness" and the physical/biological layer of the brain that is required to fully explain all of human experience and behaviour.
1) Firstly, it's hard to see how biology and classical physics can explain consciousness. Note that I'm not talking about human intelligence etc., because there are at least plausible ways to imagine how that could be "implemented" via classical physics. What I'm talking about is "the inner experience", i.e. the experience of existing, the subjective. Isn't it weird that we have such an experience? What would be the substrate of such an experience? Within classical physics, I could perhaps accept a world full of zombies running around seemingly intelligent but without an inner experience. It's not that I don't accept emergent phenomena in general - I accept that intelligence can result from very simple building blocks. But I don't see how the same is true for the subjective experience of existing.
Now this is Slashdot, so a coding analogy would be in order: Understanding consciousness within classical physics is like trying to play a sound on a computer without a sound card - it can't be done no matter what clever programming you use, since the basic building block or "API" isn't there!
Now, the problem with this idea is that it is very hard to measure this "inner experience" for anyone other than the person experiencing it. This is what it means for it to be subjective, and this is what is "magic" about it. But for the individual, the experience is valid and real. And at least to me, there seems to be no way of understanding it within classical physics.
2) There's some experimental evidence. For instance, the element xenon is almost chemically inert. Still, it is a powerful anaesthetic. However, note that as an anaesthetic it doesn't just shut off all cells. It doesn't even shut off all of the brain or anything of that sort. Rather, it selectively shuts off consciousness! A person sedated with xenon can still breathe, the heart is still beating etc. However, the experience of existing is (temporarily) gone. Now, how can such a primitive one-atom entity as xenon have such selective effects on consciousness? If consciousness were some complex emergent phenomenon, wouldn't it take a complicated molecule to go into the brain and find exactly the right neurons to affect, so as to leave the vital functions intact while switching off only consciousness? Xenon hardly seems capable of that kind of precision!
No one really knows how xenon does it - but since it is chemically inert, whatever it does must happen at the "van der Waals" level. Some experiments indicate it might affect special pockets in certain proteins via what is at least a semi-quantum effect. Given this evidence, it doesn't seem like much of a jump to consider these pockets essential to consciousness - perhaps mediating it?
Everyone knows stories of someone who had relatively minor symptoms for a prolonged period of time... that ultimately turned out to be caused by cancer, which was by then in a late stage. We are also reminded every day of how important it is to go to the doctor with symptoms early, while there's still a chance for a cure, etc. etc. I think most people know that in all likelihood, what they have is nothing. They know the odds that it is cancer this particular time around may be only 1 in 10,000. But they also know that 30-40% of the population will be diagnosed with cancer at some point, and they have all heard of some case where it hit early and where the first symptoms were vague. What if THEIR case is the unlucky one? They have only this one life, so if it turns out to be serious, the fact that the risk was low will be little comfort - they will have the disease in full force, perhaps detected at a late stage.
Given all this, isn't it understandable that people might want tests, second opinions etc.? Isn't that exactly what the cancer campaigns ask them to do? The problem is that we know too much, and the whole concept of "risk" has become embedded in everything we do. This means we focus on disease even when we are relatively healthy. But it also enables us to sometimes detect diseases early, and to cure otherwise fatal illnesses (some of those cures being heavily dependent on early detection).
I think that short of outlawing screening tests, or massive programs to change people's attitude towards life and death (i.e. changing the perception that "a long life is good, a short life is bad" and the almost-entitlement to a long life), nothing is going to stop patients from wanting tests, and nothing is going to stop doctors from providing them and profiting from them.
1) One possibility is that the USB device 'cheated' and installed a custom device driver when plugged in. Such a device driver could intercept file system calls (sitting as a file system filter driver on Windows) and could pull off the feat. One problem is that a unique device driver would be required for each platform - at least for every platform on which the device is supposed to display the described behaviour when writing files.
2) A more complete solution would instead involve the device interpreting the file system structures written by the operating system. It would tell the operating system it was a 500 GB device and the o/s would put a 500 GB file system on it. The device would then interpret the file system structures so that it knew which files were stored where, could detect the writing of big files, and - when it needed to cycle back to the beginning of a file - could redirect write requests for the trailing sectors to where the beginning of the file is stored. Such a solution is definitely possible and doesn't require a device driver. But it is file system dependent and probably quite complex to implement. For FAT32 it is probably doable; for NTFS it is probably impractical. Given the large (claimed) size of the device, the user would likely format it with NTFS. For this reason, I think this is unlikely to be the method employed.
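For what it's worth, here is a rough idea of what "interpreting the file system structures" would mean in the FAT32 case. This is only a sketch in Python, not real controller firmware; the field offsets come from the standard FAT32 boot sector (BPB) layout, and everything else about it is illustrative.

```python
import struct

def parse_fat32_layout(boot_sector: bytes):
    """Pull the fields out of a FAT32 boot sector (BPB) that firmware would
    need in order to know where the data region (the clusters) lives."""
    bytes_per_sector    = struct.unpack_from("<H", boot_sector, 11)[0]  # BPB_BytsPerSec
    sectors_per_cluster = boot_sector[13]                               # BPB_SecPerClus
    reserved_sectors    = struct.unpack_from("<H", boot_sector, 14)[0]  # BPB_RsvdSecCnt
    num_fats            = boot_sector[16]                               # BPB_NumFATs
    fat_size_sectors    = struct.unpack_from("<I", boot_sector, 36)[0]  # BPB_FATSz32

    # In FAT32 the data region starts right after the reserved area and the FATs.
    first_data_sector = reserved_sectors + num_fats * fat_size_sectors
    return bytes_per_sector, sectors_per_cluster, first_data_sector

def cluster_to_sector(cluster: int, sectors_per_cluster: int, first_data_sector: int) -> int:
    """Clusters are numbered from 2, so cluster N begins at this sector."""
    return first_data_sector + (cluster - 2) * sectors_per_cluster
```

Knowing the cluster-to-sector mapping is only the start, of course - the firmware would also have to follow the FAT chains and parse directory entries to work out which clusters belong to which file, which is exactly why this approach gets complex, and why NTFS (with its MFT, journalling, etc.) would be far worse.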
3) A much simpler and probably more likely solution is the following: when the device detects a series of sequential writes that goes beyond the actual capacity of the drive, it simply starts redirecting those writes to the sectors written at the beginning (or near the beginning) of the current sequential write series. This solution would not be specific to a given o/s or file system, but it does depend on the o/s copying large files through sequential writes - or almost sequential writes, depending on how much logic is put into the device's detection routine (for instance, it could tolerate intermittent writes to the file system structures as long as it saw continued sequentiality). The hacker could have analysed the write patterns of common operating systems and file systems to come up with a simple algorithm that works almost all of the time. (A rough sketch of this wrap-around logic is below.)
This solution might run into trouble on fragmented drives, but given that the purpose is just to convince a potential customer, and that such a demonstration would likely take place on a freshly formatted drive, this shortcoming is probably irrelevant.
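To make the idea concrete, here is a toy simulation of the kind of remapping logic the controller firmware might contain. All the names, numbers and the "one write per sector" model are made up for illustration; real firmware would work in flash pages and would need a much more forgiving notion of "sequential".

```python
class FakeCapacityDrive:
    """Toy model of a drive that advertises more sectors than it has and
    silently wraps long sequential writes back onto earlier sectors."""

    def __init__(self, real_sectors: int, advertised_sectors: int):
        self.real_sectors = real_sectors              # what the flash can actually hold
        self.advertised_sectors = advertised_sectors  # what the OS is told
        self.storage = {}                             # physical sector -> data
        self.run_start = 0                            # first LBA of the current sequential run
        self.expected_lba = None                      # next LBA if the run continues

    def write(self, lba: int, data: bytes):
        # Track sequential runs: a write to the expected next LBA extends the
        # current run, anything else starts a new one.
        if lba != self.expected_lba:
            self.run_start = lba
        self.expected_lba = lba + 1

        if lba < self.real_sectors:
            physical = lba                            # still fits on real flash
        else:
            # Past the real capacity: wrap back onto the sectors written near
            # the start of this run, quietly overwriting the file's own head.
            window = max(1, self.real_sectors - self.run_start)
            physical = self.run_start + (lba - self.run_start) % window
        self.storage[physical] = data

# Tiny demo (sector counts are deliberately small; a real fake would claim
# hundreds of gigabytes while holding only a few).
drive = FakeCapacityDrive(real_sectors=1_000, advertised_sectors=1_000_000)
for lba in range(5_000):                              # one long sequential copy
    drive.write(lba, b"\x00" * 512)
print(len(drive.storage))                             # never exceeds 1000
```

The interesting part is only the else branch: every write still "succeeds" as far as the OS is concerned, so a quick copy-a-big-file demo looks fine, and only a later read-back or verify would reveal that the start of the file has been overwritten by its own tail.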
8 Catfish = 1 Octo-puss