60581431
submission
KentuckyFC writes:
In 1948, the Bell Labs mathematician and engineer Claude Shannon published A Mathematical Theory of Communication (pdf). In it, he laid out the basic process of communication and formally introduced ideas such as information, the roles of transmitters and receivers, and the idea of a channel and its capacity to carry information. This theory now forms the basis of all digital communication, so it's no exaggeration to say that it has been hugely influential. By contrast, no equivalent theory exists for quantum information, despite decades of work by quantum theorists. That could all change now thanks to the work of David Deutsch, a theoretical physicist who has developed a theory that links classical and quantum information using a deeper theoretical framework. Deutsch's new approach is called constructor theory and it turns the conventional approach to physics on its head. Physicists currently ply their trade by explaining the world in terms of initial conditions and laws of motion. This leads to a distinction between what happens and what does not happen. By contrast, Deutsch's new fundamental principle is that all laws of physics are expressible entirely in terms of the physical transformations that are possible and those that are impossible. In other words, the laws of physics do not tell you what is possible and impossible; instead, what is possible and impossible determines the laws. So reasoning about the physical transformations that are possible and impossible leads to the laws of physics. He uses this approach to derive a number of principles that all physical laws must follow, both those that are known and those that are unknown. Consequently, constructor theory must be deeper than all known physical theories such as quantum mechanics and relativity. He draws an analogy with conservation laws, which are deeper than the other physical laws that must obey them. It's too early to say what impact Deutsch's new approach will have. 
But he has a spectacular record in physics having been a pioneer of quantum computation in the 1980s and one of the chief exponents of the multiverse, both of which have become mainstream ideas.
60500623
submission
KentuckyFC writes:
The history of astronomy is littered with ideas that once seemed incontrovertibly right and yet later proved to be bizarrely wrong. Not least among these are the ancient ideas that the Earth is flat and at the centre of the universe. But there is no shortage of others from the modern era. Now one astronomer has compiled a list of examples of wrong-thinking that have significantly held back progress in astronomy. These include the idea put forward in 1909 that telescopes had reached optimal size and that little would be gained by making them any bigger. Then there was the NASA committee that concluded that an orbiting x-ray telescope would be of little value. This delayed by half a decade the launch of the first x-ray telescope, which went on to discover, among other things, the first black hole candidate. And perhaps most spectacularly wrong was the idea that other solar systems must be like our own, with Jupiter-like planets orbiting at vast distances from their parent stars. This view probably delayed the discovery of the first exoplanet by 30 years. Indeed, when astronomers did find the first exo-Jupiter, the community failed to recognise it as a planet for six years. As Mark Twain once put it: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."
60271371
submission
KentuckyFC writes:
The study of proteins has become one of the hottest topics in science in the last 20 years, and not just for biologists. Researchers have been measuring the electrical properties of proteins for some time, discovering that some of them act like switches in certain circumstances. That's potentially useful, but without a robust theoretical model of how these properties arise, nobody has been able to incorporate proteins into real devices. Now electronics engineers have developed the first model that reliably describes the real electrical behaviour of proteins and how it changes when they bond to other molecules. It even predicts the behaviour in new situations. That should make it possible to use proteins in the same way as other electronic components such as transistors, diodes and so on. That's leading to an entirely new field of science called proteotronics in which proteins work seamlessly with other components in electronic devices. First up, an electronic nose based on the olfactory receptor OR-17, a protein found in rats, which behaves like an electronic switch when it detects the presence of aldehydes such as octanal.
60134429
submission
KentuckyFC writes:
In behavioural psychology, the theory of operant conditioning is the notion that an individual's future behaviour is determined by the punishments and rewards he or she has received in the past. It means that specific patterns of behaviour can be induced by punishing unwanted actions while rewarding others. While the theory is more than 80 years old, it is hard at work in the 21st century in the form of up and down votes--or likes and dislikes--on social networks. But does this form of reward and punishment actually deter unwanted actions while encouraging good behaviour? Now a new study of the way voting influences online behaviour has revealed the answer. The conclusion is that negative feedback leads to behavioural changes that are hugely detrimental to the community. Not only do authors of negatively-evaluated content contribute more, but their future posts are of lower quality and are perceived by the community as such. What's more, these authors are more likely to evaluate fellow users negatively in future, creating a vicious circle of negative feedback. By contrast, positive feedback does not influence authors much at all. That's exactly the opposite of what operant conditioning theory predicts. The researchers have a better suggestion for social networks: "Given that users who receive no feedback post less frequently, a potentially effective strategy could be to ignore undesired behaviour and provide no feedback at all." Would /.-ers agree?
60084731
submission
KentuckyFC writes:
One of the most widely shared tweets in history is Obama's "Four more years (link to picture)", posted after his second presidential election victory and currently retweeted 775,000 times. But how would different wording have influenced this tweet's popularity and the way it spread? It's easy to imagine that there's no way of telling what might have been in such an alternative universe. But a surprising phenomenon on Twitter has allowed data scientists to study this kind of alternative reality and work out the factors that make one tweet more popular than another. It turns out that the Twitter stream contains a surprisingly large number of tweets from the same authors, pointing to the same content but with different messages. That's a natural experiment in which factors such as the author, the URL, the number of followers and so on are all held constant while the message varies. By studying these pairs of tweets, researchers can measure how well each performs and then determine which factors contribute to their popularity. These turn out to be things like the amount of information the tweet contains, the language it uses and even whether it includes a request for a retweet. The team has developed an algorithm that predicts which of a pair of tweets is more likely to be successful with greater accuracy than humans. And they've even set up a website where anybody can test their tweet-rating ability and thereby improve their chances of writing the perfect tweet.
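The natural-experiment setup described above can be sketched in a few lines: grouping tweets by author and URL yields matched sets in which only the wording varies. The record layout and the example tweets here are invented for illustration, not taken from the study.

```python
from collections import defaultdict

# Toy tweet records: (author, url, text, retweets). The fields and the
# example tweets are hypothetical; the study works with real Twitter data.
tweets = [
    ("nasa", "http://example.com/a", "Our new photo of Saturn", 120),
    ("nasa", "http://example.com/a", "Please RT: a stunning new photo of Saturn", 340),
    ("bbc", "http://example.com/b", "Election results are in", 500),
]

def matched_sets(tweets):
    # Group tweets by (author, url): within a group the author, audience
    # and linked content are held constant, so only the message varies.
    groups = defaultdict(list)
    for author, url, text, retweets in tweets:
        groups[(author, url)].append((text, retweets))
    # Only groups with at least two differently-worded tweets form a
    # natural experiment.
    return [g for g in groups.values() if len(g) >= 2]

pairs = matched_sets(tweets)
```

Within each matched set, comparing the retweet counts of the variants isolates the effect of the wording itself, which is the training signal for the prediction algorithm.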
60033035
submission
KentuckyFC writes:
There is growing evidence that the centre of the Milky Way contains a mysterious object some 4 million times more massive than the Sun. Many astronomers believe that this object, called Sagittarius A*, is a supermassive black hole that was crucial in the galaxy's birth and formation. The thinking is that about 100 million years after the Big Bang, this supermassive object attracted the gas and dust that eventually became the Milky Way. But there is a problem with this theory--100 million years is not long enough for a black hole to grow so big. The alternative explanation is that Sagittarius A* is a wormhole that connects the Milky Way to another region of the universe or even to another universe in the multiverse. Cosmologists have long known that wormholes could have formed in the instants after the Big Bang and that these objects would have been preserved during inflation to appear today as supermassive objects hidden behind an event horizon, like black holes. It's easy to imagine that it would be impossible to tell these objects apart. But astronomers have now worked out that wormholes are smaller than black holes and so bend light from an object orbiting close to them, such as a plasma cloud, in a unique way that reveals their presence. They've even simulated what such a wormhole would look like. No telescope is yet capable of resolving images like these but that is set to change. An infrared instrument called GRAVITY is currently being prepared for the Very Large Telescope Interferometer in Chile and should be in a position to spot the signature of a wormhole, if it is there, in the next few years.
59855449
submission
KentuckyFC writes:
Random numbers are the lifeblood of many cryptographic systems and demand for them will only increase in the coming years as techniques such as quantum cryptography become mainstream. But generating genuinely random numbers is a tricky business, not least because it cannot be done with a deterministic process such as a computer program. Now physicists have worked out how to use a smartphone camera to generate random numbers using quantum uncertainties. The approach is based on the fact that the emission of a photon is a quantum process that is always random. So in a given unit of time, a light emitter will produce a number of photons that varies by a random amount. Counting the number of photons gives a straightforward way of generating random numbers. The team points out that the pixels in smartphone cameras are now so sensitive that they can pick up this kind of quantum variation. And since a camera has many pixels working in parallel, a single image can generate large quantities of random digits. The team demonstrates the technique in a proof-of-principle experiment using the 8 megapixel camera on a Nokia N9 smartphone while taking images of a green LED. The result is a quantum random number generator capable of producing digits at the rate of 1 megabit per second. That's more than enough for most applications and raises the prospect of credit card transactions and encrypted voice calls from an ordinary smartphone that are secured by the laws of quantum physics.
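A rough simulation of the idea: treat each pixel's photon count as a Poisson random variable (shot noise) and distil unbiased bits from the counts. This sketch uses NumPy's Poisson sampler as a stand-in for the physical sensor, and a von Neumann extractor as one standard debiasing trick; the paper's actual extraction procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)  # stand-in for the physical sensor noise

def capture_frame(mean_photons=100.0, shape=(480, 640)):
    # Photon arrival at each pixel is a Poisson process; the shot noise
    # in these counts is the quantum entropy source being exploited.
    return rng.poisson(mean_photons, size=shape)

def extract_bits(frame):
    # Von Neumann debiasing on the least-significant bits of the counts:
    # a pair (0, 1) yields 0, a pair (1, 0) yields 1, equal pairs are
    # discarded, giving unbiased bits from a possibly biased source.
    lsb = (frame.ravel() & 1).astype(np.uint8)
    pairs = lsb[: len(lsb) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

bits = extract_bits(capture_frame())
```

Because every pixel contributes in parallel, even a single frame yields tens of thousands of bits, which is the effect that lets an 8 megapixel sensor reach megabit-per-second rates.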
59816981
submission
KentuckyFC writes:
One of the most profound advances in science in recent years is the way researchers from a variety of fields are beginning to formulate the problem of consciousness in mathematical terms, in particular using information theory. That's largely thanks to a relatively new theory that consciousness is a phenomenon which integrates information in the brain in a way that cannot be broken down. Now a group of researchers has taken this idea further using algorithmic theory to study whether this kind of integrated information is computable. They say that the process of integrating information is equivalent to compressing it. That allows memories to be retrieved but it also loses information in the process. But they point out that this cannot be how real memory works, otherwise retrieving memories repeatedly would cause them to gradually decay. By assuming that the process of memory is non-lossy, they use algorithmic theory to show that the process of integrating information must be noncomputable. In other words, your PC can never be conscious in the way you are. That's likely to be a controversial finding but the bigger picture is that the problem of consciousness is finally opening up to mathematical scrutiny for the first time.
59785977
submission
KentuckyFC writes:
Last year, astronomers announced that a small ball of ice and rock heading towards the inner Solar System could turn out to be the most eye-catching comet in living memory. They calculated that Comet Ison's orbit would take it behind the Sun but that it would then head towards Earth where it would put on a spectacular display of heavenly fireworks. Sure enough, Ison brightened dramatically as it headed Sunwards. But as astronomers watched on the evening of 28 November, the brightly flaring Ison moved behind the Sun but never emerged. The comet simply disappeared. Now a new analysis of the death of Ison suggests that the comet was doomed long before it reached the Sun. Images from several Sun-observing spacecraft that had a unique view of events indicate that Ison exhausted its supply of water and other ice in the final flare-ups as it approached the Sun. The new study shows that all that was left in its last hours were a few hundred thousand pebbles glowing brightly as they vapourised in the Sun's heat. In fact, Comet Ison died in full view of the watching hordes of astronomers on Earth who did not realise what they were watching at the time.
59616965
submission
KentuckyFC writes:
One of the main goals of the space program is to spot an Earth-like planet orbiting another star. And by Earth-like, astronomers mean a planet with liquid water, gaseous oxygen and even chlorophyll, or a light-harvesting molecule like it. The biosignatures of these molecules were all observed during the first Earth fly-by in 1990 when the Galileo spacecraft measured the light reflected off Earth as it flew past on its way to Jupiter. But if these biosignatures exist on more distant exoplanets, could we spot them today? Now astronomers have calculated how good the next generation of space telescopes will have to be to pick up these biosignatures of life. They say that gaseous water should be relatively straightforward to pick out and that oxygen will be more challenging. But the spectral signature of chlorophyll-like molecules will be much harder to spot, requiring significantly more sensitivity than is possible today (either that or a great deal of luck). That suggests a plan, they say. The next generation of space telescopes should look for water and oxygen on exoplanets orbiting nearby stars and only then begin the time-consuming and expensive task of looking for chlorophyll on the most promising targets. One spacecraft that might do this is the Advanced Technology Large-Aperture Space Telescope or ATLAST that is currently scheduled for launch in the 2025-2035 time frame.
59534561
submission
KentuckyFC writes:
In June 1972, nuclear scientists at the Pierrelatte uranium enrichment plant in south-east France noticed a strange deficit in the amount of uranium-235 they were processing. That's a serious problem in a uranium enrichment plant where every gram of fissionable material has to be carefully accounted for. The ensuing investigation found that the anomaly originated in the ore from the Oklo uranium mine in Gabon, which contained only 0.600% uranium-235 compared to 0.7202% for all other ore on the planet. It turned out that this ore was depleted because it had gone critical some 2 billion years earlier, creating a self-sustaining nuclear reaction that lasted for 300,000 years and used up the missing uranium-235 in the process. Since then, scientists have studied this natural reactor to better understand how buried nuclear waste spreads through the environment and also to discover whether the laws of physics that govern nuclear reactions may have changed in the 2 billion years since the reactor switched off. Now a review of the science that has come out of Oklo shows how important this work has become but also reveals that there is limited potential to gather more data. After an initial flurry of interest in Oklo, mining continued and the natural reactors--surely among the most extraordinary natural phenomena on the planet--have all been mined out.
59290225
submission
KentuckyFC writes:
Memes are the cultural equivalent of genes: units that transfer ideas or practices from one human to another by means of imitation. In recent years, network scientists have become increasingly interested in how memes spread, work that has led to important insights into the nature of news cycles, into information avalanches on social networks and so on. But what exactly makes a meme and distinguishes it from other forms of information is not well understood. Now a team of researchers has developed a way to automatically distinguish scientific memes from other forms of information for the first time. Their technique exploits the way scientific papers reference older papers on related topics. They scoured the half a million papers published by Physical Review between 1893 and 2010 looking for common words or phrases. They define an interesting meme as one that is more likely to appear in a paper that cites another paper in which the same meme occurs. In other words, interesting memes are more likely to replicate. They end up with a list of words and phrases that have spread by replication and can also see how this spreading has changed over the last 100 years. The top five phrases are: loop quantum cosmology, unparticle, sonoluminescence, MgB2 and stochastic resonance; all of which are important topics in physics. The team say the technique is interesting because it provides a way to distinguish memes from other forms of information that do not spread in the same way through replication.
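The replication criterion above can be made concrete with a toy citation graph. The score below compares how often a phrase appears in papers that cite a carrier of the phrase versus papers that don't; it is a simplified version of the paper's propagation score, and the graph itself is invented for illustration.

```python
# Toy citation graph: paper id -> (cited paper ids, phrases in the paper).
# The papers, links and phrase occurrences are invented for illustration.
papers = {
    1: (set(), {"unparticle"}),
    2: ({1}, {"unparticle"}),
    3: ({1}, {"unparticle"}),
    4: ({2}, set()),
    5: (set(), set()),
    6: ({5}, set()),
    7: ({5}, {"unparticle"}),
}

def propagation_score(meme, papers):
    """Ratio of P(paper carries meme | it cites a carrier) to
    P(paper carries meme | it cites no carrier). A ratio above 1
    means the phrase tends to spread by replication."""
    carriers = {pid for pid, (_, phrases) in papers.items() if meme in phrases}
    n_yes = k_yes = n_no = k_no = 0
    for pid, (cites, phrases) in papers.items():
        if not cites:
            continue  # papers with no references enter neither conditional
        if cites & carriers:
            n_yes += 1
            k_yes += meme in phrases
        else:
            n_no += 1
            k_no += meme in phrases
    p_yes = k_yes / n_yes if n_yes else 0.0
    p_no = k_no / n_no if n_no else 0.0
    return p_yes / p_no if p_no else float("inf")

score = propagation_score("unparticle", papers)  # 4/3 on this toy graph
```

Run over every frequent word or phrase in a corpus, a score like this separates phrases that ride along citation chains from background vocabulary that appears everywhere regardless of citations.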
59204713
submission
KentuckyFC writes:
Face recognition has come a long way in recent years. In ideal lighting conditions, given the same pose, facial expression etc, it easily outperforms humans. But the real world isn't like that. People grow beards, wear make up and glasses, make strange faces and so on, which makes the task of facial recognition tricky even for humans. A well-known photo database called Labeled Faces in the Wild captures much of this variation. It consists of 13,000 face images of almost 6000 public figures collected off the web. When images of the same person are paired, humans can correctly spot matches and mismatches 97.53 per cent of the time. By comparison, face recognition algorithms have never come close to this. Now a group of computer scientists have developed a new algorithm called GaussianFace that outperforms humans in this task for the first time. The algorithm normalises each face into a 150 x 120 pixel image by transforming it based on five image landmarks: the position of both eyes, the nose and the two corners of the mouth. After being trained on a wide variety of images in advance, it can then compare faces looking for similarities. It does this with an accuracy of 98.52 per cent; the first time an algorithm has beaten human-level performance in such challenging real-world conditions. You can test yourself on some of the image pairs on the other side of the link.
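The landmark normalisation step lends itself to a short sketch: given the five detected landmark positions, solve for the least-squares affine transform that maps them onto canonical positions in the 150 x 120 output frame. The template coordinates below are illustrative guesses, not the paper's actual values, and a real pipeline would then warp the whole image with the recovered transform.

```python
import numpy as np

# Canonical landmark positions (x, y) in the 150 x 120 output image:
# both eyes, the nose tip and the two mouth corners. These template
# coordinates are illustrative, not taken from the paper.
TEMPLATE = np.array([
    [35.0, 45.0],   # left eye
    [85.0, 45.0],   # right eye
    [60.0, 75.0],   # nose tip
    [42.0, 105.0],  # left mouth corner
    [78.0, 105.0],  # right mouth corner
])

def alignment_transform(landmarks):
    """Least-squares affine transform mapping detected landmarks
    (a 5 x 2 array) onto the canonical template positions."""
    n = landmarks.shape[0]
    A = np.hstack([landmarks, np.ones((n, 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, TEMPLATE, rcond=None)
    return M  # 3 x 2 matrix: [x, y, 1] @ M -> template coordinates

# A face detected at twice the template scale and shifted by (10, 20):
detected = TEMPLATE * 2 + np.array([10.0, 20.0])
M = alignment_transform(detected)
aligned = np.hstack([detected, np.ones((5, 1))]) @ M
```

Because scale, translation and rotation are removed before any comparison, the recogniser only has to cope with the variation that actually matters: expression, lighting, glasses and the like.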
59038463
submission
KentuckyFC writes:
Typeface design is something of an art. For many centuries, this art has been constrained by the materials available to typographers, mainly lead and wood. More recently, typographers have been freed from this constraint with the advent of digital typesetting and the number of typefaces has mushroomed. Verdana, for example, is designed specifically for computer screens. Now a father and son team of mathematicians have devised a number of typefaces based on problems they have studied in computational geometry. For example, one typeface is inspired by the folds and valleys generated by computational origami designs. Another is based on the open problem of “whether every disjoint set of unit disks (gears or wheels) in the plane can be visited by a single taut non-self-intersecting conveyer belt.” Interestingly, several of the new typefaces also serve as puzzles in which messages are the solutions.
58944499
submission
KentuckyFC writes:
Iapetus, Saturn’s third largest moon, was first photographed by the Cassini spacecraft on 31 December 2004. The images created something of a stir. Clearly visible was a narrow, steep ridge of mountains that stretches almost halfway around the moon’s equator. The question that has since puzzled astronomers is how this mountain range got there. Now evidence is mounting that this mountain range is not the result of tectonic or volcanic activity, like mountain ranges on other planets. Instead, astronomers are increasingly convinced that this mountain range fell from space. The latest evidence is a study of the shape of the mountains using 3-D images generated from Cassini data. They show that the angle of the mountainsides is close to the angle of repose, that’s the greatest angle that a granular material can form before it collapses in a landslide. That’s not proof but it is certainly consistent with this exotic formation theory. So how might this have happened? Astronomers think that early in its life, Iapetus must have been hit by another moon, sending huge volumes of ejecta into orbit. Some of this condensed into a new moon that escaped into space. However, the rest formed an unstable ring that gradually spiralled in towards the moon, eventually depositing the material in a narrow ridge around the equator. Cassini’s next encounter with Iapetus will be in 2015 which should give astronomers another chance to study the strangest mountain range in the Solar System.