The paper that comes to my mind when I read your post is:
Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543-545. doi:10.1038/nn.2112 (article paywalled, but a quick Google turns up an alternative link to the PDF).
I've a small collection of references for scientific "mind reading" studies I've gathered over the years, so if it's not the one you're thinking of, give me some more details and I might be able to dig it up for you.
It seems that in the "virtual reality" experiment, the rat perceives the return trip as a second one-way trip rather than a return trip. This could be explained by the lack of certain senses due to the limited inputs (no sense of acceleration, for example), so the rat's brain does not really register that it has turned around.
This is one of the most interesting findings of the study. In the real world, the rats turn themselves round 180 degrees when they reach the end of the track. In the virtual world, the environment is rotated 180 degrees while the rats remain pointing in the same direction. This suggests that the visual cues provided by rotating the virtual environment around the rat are not sufficient to persuade it that it is now running in the opposite direction. This gets us a little closer to understanding which sensory inputs the rat uses to determine its location. The study strongly suggests that the rat's perceived direction of motion is what makes the place cells behave differently in the real and virtual worlds.
However, we still don't know whether the rat is relying primarily on visual cues or on self-motion cues. In the visual case, the difference in place cell behaviour between the real and virtual worlds might be explained by the rat transforming the visual cues from the side walls to account for its reversed direction of travel in the real world (making each location look similar from both directions). In the virtual world, the rat might believe it is still going in the same direction and therefore not transform the visual cues (making each location look different from the two directions). In the self-motion case, the rat could be keeping a "dead reckoning" estimate of the distance travelled from the ends of the track. In the real world, the rat might increment its position estimate when travelling from left to right and decrement it when travelling from right to left. In the virtual world, the rat might be incrementing its position from the ends in both directions, as its perceived direction of travel is unchanged. However, this would probably require the rat to reset its perceived position to the "start" of the track when it reaches the "end" in the virtual world, but not in the real world, which may not be plausible.
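The two dead-reckoning possibilities can be sketched as a toy model (entirely hypothetical: the track length, unit step size and reset rule are just illustrative assumptions, not anything measured in the study):

```python
TRACK_LENGTH = 10  # arbitrary units; purely illustrative


def real_world_step(pos, direction):
    """Real world: the rat turns round at each end, so it increments its
    dead-reckoned position going left-to-right and decrements it going
    right-to-left."""
    return pos + 1 if direction == "left_to_right" else pos - 1


def virtual_world_step(pos):
    """Virtual world: the perceived direction never changes, so the rat
    always increments from the end it just left. That only works if the
    estimate resets to 0 on reaching the "end" of the track, which is the
    possibly implausible reset mentioned above."""
    pos += 1
    return 0 if pos >= TRACK_LENGTH else pos
```

On this toy model a real-world rat at position 3 heading right-to-left ends up at 2, while a virtual-world rat always counts upwards from whichever end it last reset at, so the same physical spot gets two different position estimates depending on the leg of the journey.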
The fact that more than twice as many place cells are active in the real world as in the virtual world is also interesting. The idea is that place cells combine a range of inputs to fire consistently in one spatial location, letting the rat know where it is on an internal "map" of the environment. That so many fewer place cells fire in the absence of cues from certain senses (e.g. vestibular, whisker, smell) could suggest that the importance of these inputs varies significantly across place cells. Alternatively, it might be that multiple place cells encode unique properties of a location as perceived by different senses. I am somewhat familiar with the literature on place cells, but I am not sure whether we know if each location is uniquely coded for by a single place cell. My understanding is that each experiment can only record from a small number of place cells at once, so studies would be unlikely to simultaneously record from different place cells that code for the same spatial location (assuming they exist).
IANANBIWWS (I Am Not A Neuroscientist But I Work With Some)
From the article: "Each smartphone in the network can operate up to about 100 feet away from its nearest neighbor. VoIP works over up to 5 hops."
By my maths, that gives phone calls over about 500 feet (152 metres). Point-to-point communication using cheap PMR446 radios would do a better job if the mobile network went down, with a range of up to a few kilometres in open space and a few hundred metres in the city (though channel collisions might be more of an issue than with VoIP over wifi). These are as cheap as £15 for a pair. Heck, I could probably just about shout over 150 metres.
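For what it's worth, the back-of-the-envelope figure is just the per-hop range times the hop limit (per-hop range and hop count taken from the article; the feet-to-metres factor is exact):

```python
HOP_RANGE_FT = 100   # ~100 feet between neighbouring phones (from the article)
MAX_HOPS = 5         # VoIP works over up to 5 hops (from the article)
M_PER_FT = 0.3048    # exact feet-to-metres conversion

max_range_ft = HOP_RANGE_FT * MAX_HOPS
max_range_m = max_range_ft * M_PER_FT
print(max_range_ft, round(max_range_m, 1))  # 500 152.4
```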
I will grant that the key benefit of this approach is that it works with the phone you already have, and working with the equipment you have is pretty much the only option for communication for the general populace in an emergency (such as the earthquake in Haiti that motivated this work). However, you would need a suitable ad-hoc VoIP system that can run on a local (not internet-connected) network and ideally connect using mobile phone numbers as VoIP identities (a bit like a distributed version of Viber).
However, the article notes that the mobile infrastructure was still operational, just overwhelmed by the sheer weight of traffic. It is therefore likely that some internet connectivity also remained, as both often rely on similar backhaul connectivity. In that case, having phones that can connect to the mobile network via wifi access points (e.g. UMA) would also have helped, assuming that the network "crash" was a bandwidth or connection-density issue and not a failure of the backend subscriber management systems. Orange in the UK have this technology deployed, but the number of compatible handsets is very low. As others have pointed out, offloading a portion of calls and data over internet connections makes sense for the operators in non-disaster conditions too, reducing contention for limited bandwidth. I for one would like to see UMA technology become standard in all wifi-capable smartphones.
HELP!!!! I'm being held prisoner in /usr/games/lib!