KentuckyFC (1144503) writes "Earth's closest white dwarf is called van Maanen 2 and sits 14 light years from here. It was discovered by the Dutch astronomer Adriaan van Maanen in 1917, but it was initially hard to classify. That's because its spectrum contains lots of heavy elements alongside hydrogen and helium, the usual components of a white dwarf photosphere. In recent years, astronomers have discovered many white dwarfs with similar spectra and shown that the heavy elements come from asteroids raining down onto the surface of the stars. It turns out that these white dwarfs are each orbited by a large planet and an asteroid belt. As the planet orbits, it perturbs the rocky belt, causing asteroids to collide and spiral in towards their parent star. This process is so common that astronomers now use the heavy element spectra as a marker for the presence of extrasolar planets. And a re-analysis of van Maanen's work shows that, in hindsight, he was the first to discover the tell-tale signature of extrasolar planets almost a century ago."
KentuckyFC (1144503) writes "One way to explore the link between quantum mechanics and general relativity is to study the physics that occurs on a small scale in highly curved spacetimes. However, these conditions only occur in the most extreme environments such as at the edge of black holes or in the instants after the Big Bang. But now one physicist has described how it is possible to create curved spacetime in an ordinary quantum optics lab. The idea is based on optical lattices, which form when a pair of lasers interfere to create an eggbox-like interference pattern. When ultracold atoms are dropped into the lattice, they become trapped like ping pong balls in an eggbox. This optical trapping technique is common in labs all over the world. However, the ultracold atoms do not stay at a fixed location in the lattice because they can tunnel from one location to another. This tunnelling is a form of movement through the lattice and can be controlled by changing the laser parameters to make tunnelling easier or more difficult. The physicist has shown that on a large scale, the tunnelling motion of atoms through the lattice is mathematically equivalent to the motion of atoms in a quantum field in a flat spacetime. And that means it is possible to create a formal analogue of a curved spacetime by changing the laser parameters across the lattice. Varying the laser parameters over time even simulates the behaviour of gravitational waves. Creating this kind of curved spacetime in the lab won't reveal any new physics but it will allow researchers to study the behaviour of existing laws under these conditions for the first time. That's not been possible even in theory because the equations that describe these behaviours are so complex that they can only be solved in the simplest circumstances."
KentuckyFC (1144503) writes "Machine learning algorithms use a training dataset to learn how to recognise features in images and use this 'knowledge' to spot the same features in new images. The computational complexity of this task is such that the time required to solve it increases polynomially with the number of images in the training set and the complexity of the 'learned' feature. So it's no surprise that quantum computers ought to be able to rapidly speed up this process. Indeed, a group of theoretical physicists last year designed a quantum algorithm that solves this problem in logarithmic time rather than polynomial time, a significant improvement. Now, a Chinese team has successfully implemented this artificial intelligence algorithm on a working quantum computer for the first time. The information processor is a standard nuclear magnetic resonance quantum computer capable of handling 4 qubits. The team trained it to recognise the difference between the characters '6' and '9' and then asked it to classify a set of handwritten 6s and 9s accordingly, which it did successfully. The team say this is the first time that this kind of artificial intelligence has ever been demonstrated on a quantum computer and opens the way to the more rapid processing of other big data sets--provided, of course, that physicists can build more powerful quantum computers."
KentuckyFC (1144503) writes "Since 2001, crowdfunding sites have raised almost $3 billion and in 2012 alone successfully funded more than 1 million projects. But while many projects succeed, far more fail. The reasons for failure are many and varied, but one of the most commonly cited is the inability to match a project with suitable investors. Now a group of researchers from Yahoo Labs and the University of Cambridge have mined data from Kickstarter to discover how investors choose projects to back. They studied over 1000 projects in the US funded by over 80,000 investors. They conclude that there are two types of backers: occasional investors who tend to back arts-related projects, probably because of some kind of social connection to the proposers; and frequent investors who have a much more stringent set of criteria. Frequent investors tend to fund projects that are well-managed, have high pledging goals, are global, grow quickly, and match their interests. The team is now working on a website that will create a list of the Twitter handles of potential investors given the URL of a Kickstarter project."
KentuckyFC (1144503) writes "Photons have many properties such as their frequency, momentum, spin and orbital angular momentum. But when it comes to quantum teleportation, physicists have only ever been able to transmit one of these properties at a time. So the possibility of teleporting a complete quantum object has always seemed a distant dream. Now a team of Chinese physicists has worked out how to teleport more than one quantum property. The team has demonstrated it by teleporting both the spin and orbital angular momentum of single photons simultaneously. They point out that there is no reason in principle why the technique cannot be generalised to include other properties as well, such as a photon's frequency, momentum and so on. That's an important step towards teleporting complex quantum objects in their entirety, such as atoms, molecules and perhaps even small viruses."
KentuckyFC (1144503) writes "The human visual system has evolved to recognise people in almost any pose under a vast range of lighting conditions. But abstract art pushes this ability to its limits by distorting the human form. In particular, Cubism seeks to represent three-dimensional objects on a two-dimensional plane by juxtaposing snapshots from different angles. The result is that a Cubist picture contains many 'fragments of perception' of the same object. That's why it is often hard for people to recognise the human figures that these pictures contain. Now a group of computer scientists has tested how computer vision algorithms fare at the task of spotting human figures in Cubist art. They compared a variety of different algorithms against humans in trying to spot human figures in 218 Cubist paintings by Picasso. Humans easily outperformed all the algorithms at this task. But some algorithms were much better than others. The most successful were based on so-called 'deformable parts models' that recognise human figures by looking for body parts rather than the entire form. Interestingly, the team says this backs up various studies by neuroscientists suggesting that the human brain works in a similar way."
KentuckyFC (1144503) writes "One way of predicting the future is to study data about events in the past and build a statistical model that generates the same pattern of data. Statisticians can then use the model to generate data about the future. Now one statistician has taken this art to new heights by predicting the content of the soon-to-be published novels in the Song of Ice and Fire series by George R R Martin. The existing five novels are the basis of the hit TV series Game of Thrones. Each chapter in the existing books is told from the point of view of one of the characters. So far, 24 characters have starred in this way. The statistical approach uses the distribution of characters in chapters in the first five books to predict the distribution in the forthcoming novels. The results suggest that several characters will not appear at all and also throw light on whether one important character is dead or not, following an ambiguous story line in the existing novels. However, the model also serves to highlight the shortcomings of purely statistical approaches. For example, it does not 'know' that characters who have already been killed off are unlikely to appear in future chapters. Neither does it allow for new characters that might appear. Nevertheless, this statistical approach to literature could introduce the process of mathematical modelling to more people than any textbook."
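The flavour of the approach is easy to sketch: count how often each POV character has appeared so far, then treat the next book as draws from the same distribution. Here's a minimal Python sketch; the character counts and book length below are illustrative placeholders, not the real tallies or the actual model from the analysis.

```python
from collections import Counter

# Hypothetical per-character POV chapter counts across the first five
# books (illustrative numbers only, not the real tallies).
pov_counts = Counter({
    "Tyrion": 47, "Jon": 42, "Arya": 33, "Daenerys": 31,
    "Sansa": 25, "Catelyn": 25, "Bran": 21, "Davos": 13,
})

total = sum(pov_counts.values())
book6_chapters = 70  # assumed length of the next book

# Naive frequency model: each new chapter draws a POV character with
# probability proportional to that character's historical share.
expected = {name: book6_chapters * n / total for name, n in pov_counts.items()}

for name, e in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {e:.1f} expected chapters")
```

This toy version shares exactly the shortcoming the submission mentions: it has no way to 'know' that a dead character cannot narrate, and it assigns zero probability to characters it has never seen.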
KentuckyFC (1144503) writes "Back in the 1970s, the astronauts from Apollos 12, 14, 15 and 16 set up an array of seismometers on the lunar surface to listen for moonquakes. This array sent back data until 1977, when NASA switched it off. Now astrophysicists are using this lunar seismic data in the hunt for gravitational waves. The idea is that gravitational waves must squeeze and stretch the Moon as they pass by and that at certain resonant frequencies, this could trigger the kind of seismic groans that the array ought to have picked up. However, the data shows no evidence of activity at the relevant frequencies. That's important because it has allowed astronomers to put the strongest limits yet on the strength of gravitational waves in this part of the universe. Earlier this year, the same team used a similar approach with terrestrial seismic data to strengthen the existing limits by 9 orders of magnitude. The lunar data betters this by yet another order of magnitude because there is no noise from sources such as oceans, the atmosphere and plate tectonics. The work shows that good science on gravitational waves can be done without spending hundreds of millions of dollars on bespoke gravitational wave detectors, such as LIGO, which have yet to find any evidence of the waves either."
KentuckyFC (1144503) writes "Underwater vehicles have never matched the extraordinary agility of marine creatures. While many types of fish can travel at speeds of up to 10 body lengths per second, a nuclear sub can manage less than half a body length per second. Now a team of researchers has copied a trick used by octopuses to build an underwater robot capable of matching the agility of marine creatures. This trick is the way an octopus expands the size of its head as it fills with water and then squirts it out to generate propulsion. The team copied this by building a robot with a flexible membrane that also expands as it fills with water. The fluid then squirts out through a rear-facing nozzle as the membrane contracts. To the team's surprise, the robot reached speeds of 10 body lengths per second with a peak acceleration of 14 body lengths per second squared. That's unprecedented in an underwater vehicle of this kind. What's more, the peak force experienced by the robot was 30 per cent greater than the thrust generated by the jet. The team think they know why and say the new technique could be used to design bigger subs capable of even more impressive octopus-like feats."
KentuckyFC (1144503) writes "It's not just star systems and galaxies that have habitable zones--regions where conditions are suitable for life to evolve. Astrophysicists have now identified the entire universe's habitable zones. Their approach starts by considering the radiation produced by gamma ray bursts in events such as the death of stars, the collisions between black holes and so on. Astrobiologists have long known that these events are capable of causing mass extinctions by stripping a planet of its ozone layer and exposing the surface to lethal levels of radiation. The likelihood of being hit depends on the density of stars, which is why the centres of galaxies are thought to be inhospitable to life. The new work focuses on the threat galaxies pose to each other, which turns out to be considerable when they are densely packed together. Astronomers know that the distribution of galaxies is a kind of web-like structure with dense knots of them connected by filaments interspersed with voids where galaxies are rare. The team says that life-friendly galaxies are most likely to exist in the low density regions of the universe, in the voids and filaments of the cosmic web. The Milky Way is in one of these low density regions, with Andromeda too far away to pose any threat. But conditions might not be so life friendly in our nearest knot of galaxies, called the Virgo supercluster."
KentuckyFC (1144503) writes "A growing number of police forces around the world are using data on past crimes to predict the likelihood of crimes in the future. These predictions can be made more accurate by combining crime data with demographic data about the local population. However, this data is time-consuming and expensive to collect and so is only rarely updated. Now a team of data experts has shown how combining crime data with data collected from mobile phones can make the prediction of future crimes even more accurate. The team used an anonymised dataset of O2 mobile phone users in the London metropolitan area during December 2012 and January 2013. They then used a small portion of the data to train a machine learning algorithm to find correlations between this and local crime statistics in the same period. Finally, they used the trained algorithm to predict future crime rates in the same areas. Without the mobile phone data, the predictions have an accuracy of 62 per cent. But the phone data increases this accuracy significantly, to almost 70 per cent. What's more, the data is cheap to collect and can be gathered in more or less real time. Whether the general population would want their data used in this way is less clear but either way, Minority Report-style policing is looking less far-fetched than when the film appeared in 2002."
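The train-then-predict pipeline described above can be sketched in a few lines. This is a toy stand-in only: the features, labels and logistic-regression learner below are all invented for illustration, and bear no relation to the O2 dataset or to whatever algorithm the Yahoo Labs/Cambridge team actually used.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the study's inputs: for each map cell, a
# demographic score and a mobile-phone activity score (both made up),
# plus a binary 'crime hotspot' label correlated with the two.
def make_cell():
    demo = random.gauss(0, 1)
    phone = random.gauss(0, 1)
    logit = 1.2 * demo + 0.8 * phone - 0.2
    label = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return (demo, phone), label

data = [make_cell() for _ in range(2000)]
train, test = data[:1500], data[1500:]

# Plain logistic regression fitted by full-batch gradient descent --
# a simple proxy learner, chosen here only for illustration.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), y in train:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        gw[0] += (p - y) * x1
        gw[1] += (p - y) * x2
        gb += p - y
    w[0] -= lr * gw[0] / len(train)
    w[1] -= lr * gw[1] / len(train)
    b -= lr * gb / len(train)

# Evaluate on held-out cells, mirroring the study's train/predict split.
correct = sum(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
    for (x1, x2), y in test
)
print(f"hold-out accuracy: {correct / len(test):.2f}")
```

Dropping the `phone` feature from the model and re-fitting is the toy analogue of the study's 62-vs-70 per cent comparison.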
KentuckyFC (1144503) writes "Pattern recognition is one of the few areas where humans regularly outperform even the most powerful computers. Our extraordinary ability is a result of the way our bodies process visual information. But surprisingly, our brains only do part of the work. The most basic pattern recognition—edge detection, line detection and the detection of certain shapes—is performed by the complex circuitry of neurons in the retina. Now particle physicists are copying this trick to hunt for new particles. A team at CERN has built and tested an artificial retina capable of identifying particle tracks in the debris from particle collisions. And it can do it at the same rate that the LHC smashes particles together, about 800 million collisions per second. In other words, it can sift through the data in real time. The team says the retina outperforms any other particle-detecting device by a factor of 400."
KentuckyFC (1144503) writes "Most research into the origin of life focuses on the messy business of chemistry, on the nature of self-replicating molecules and on the behaviour of autocatalytic reactions. Now one theorist says that the properties of information also place important limits on how life must have evolved, without getting bogged down in the biochemical details. The new approach uses information theory to highlight a key property that distinguishes living from non-living systems: their ability to store information and replicate it almost indefinitely. A measure of this is how much these systems differ from a state of maximum entropy or thermodynamic equilibrium. The new approach is to create a mathematical model of these informational differences and to use it to make predictions about how likely it is to find self-replicating molecules in Avida, an artificial life system used to study evolutionary biology. And interestingly, the predictions closely match what researchers have found in practice. The bottom line is that according to information theory, environments favourable to life are unlikely to be unusual."
KentuckyFC (1144503) writes "One of the great mysteries in astrophysics surrounds the origin of ultra-high energy cosmic rays, which can have energies of 10^20 electron volts and beyond. To put that in context, that's a single proton with the same energy as a baseball flying at 100 kilometres per hour. Nobody knows where ultra-high energy cosmic rays come from or how they get their enormous energies. That's largely because they are so rare--physicists detect them on Earth at a rate of less than one particle per square kilometre per century. So astronomers have come up with a plan to see vastly more ultra-high energy cosmic rays by using the Moon as a giant cosmic ray detector. When these particles hit the lunar surface, they generate brief bursts of radio waves that a highly sensitive radio telescope can pick up. No radio telescope on Earth is currently capable of this, but astronomers are about to start work on a new one that will be able to pick up these signals for the first time. That should help them finally tease apart the origins of these most energetic particles in the Universe."
KentuckyFC (1144503) writes "How many photons does it take to form an image? The conventional answer is tens of thousands of photons per pixel, at the very least in an ordinary camera. Now physicists have thrown convention to the wind by creating images using less than one photon per pixel. Their trick is to combine two recently discovered imaging techniques. The first, called heralded imaging, relies on entangled pairs of photons. The idea is to create a pair of photons and use one of them, the herald, to trigger a detector that records the other photon, thereby making an image. This screens out almost all background noise. The second technique is known as compressed sensing. This works by assuming the image data has certain statistical properties, which allows the image to be formed from far fewer measurements. The team has tested the idea by creating images of a standard USAF resolution target using only 0.2 photons per pixel and of a wasp wing using 0.45 photons per pixel. The technique should be particularly useful for imaging biological subjects that are likely to be damaged by large numbers of photons."
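The compressed-sensing half of the trick is straightforward to demonstrate numerically: if the signal is sparse, far fewer measurements than pixels suffice to reconstruct it. Below is a minimal NumPy sketch using iterative soft thresholding (ISTA), a standard sparse-recovery solver; the dimensions, the random measurement matrix and the solver choice are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed sensing: recover a sparse 100-pixel 'image' from only
# 40 random linear measurements (fewer measurements than pixels).
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1, (m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                             # the m measurements

# Iterative soft thresholding (ISTA) for the L1-regularised problem:
# minimise 0.5*||A x - y||^2 + lam*||x||_1
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                               # gradient step
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```

The sparsity assumption here plays the role of the 'certain statistical properties' the submission mentions; the heralding step is what makes the raw measurements clean enough for this kind of reconstruction to work.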