
Submission + - European Commission updates its open source policy (opensource.com)

jenwike writes: The European Commission wants to make it easier for its software developers to submit patches and add new functionalities to open source projects. Contributing to open source communities will be made central to the EC’s new open source policy, expects Pierre Damas, Head of Sector at the Directorate General for IT (DIGIT). "We use a lot of open source components that we adapt and integrate, and it is time that we contribute back.”

Submission + - Sony Reportedly is Using Cyber-Attacks to Keep Leaked Files From Spreading

HughPickens.com writes: Lily Hay Newman reports at Slate that Sony is counterhacking to keep its leaked files from spreading across torrent sites. According to Recode, Sony is using hundreds of computers in Asia to execute a denial of service attack on sites where its pilfered data is available, according to two people with direct knowledge of the matter. Sony used a similar approach in the early 2000s working with an anti-piracy firm called MediaDefender, when illegal file sharing exploded. The firm populated file-sharing networks with decoy files labeled with the names of such popular movies as “Spider-Man,” to entice users to spend hours downloading an empty file. "Using counterattacks to contain leaks and deal with malicious hackers has been gaining legitimacy," writes Newman. "Some cybersecurity experts even feel that the Second Amendment can be interpreted as applying to 'cyber arms'.”

Submission + - Physicist Has Groundbreaking Idea About Why Life Exists

mrspoonsi writes: Why does life exist? Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.” From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

Submission + - Nanotube Film Could Replace Defective Retinas (gizmag.com)

Zothecula writes: A promising new study suggests that a wireless, light-sensitive, and flexible film could potentially form part of a prosthetic device to replace damaged or defective retinas. The film both absorbs light and stimulates neurons without being connected to any wires or external power sources, standing it apart from silicon-based devices used for the same purpose. It has so far been tested only on light-insensitive retinas from embryonic chicks, but the researchers hope to see the pioneering work soon reach real-world human application.

Submission + - The Fastest Camera Ever Made Captures 100 Billion Frames Per Second 1

Jason Koebler writes: A new imaging technique is able to capture images at 100 billion frames per second—fast enough to watch light interact with objects, which could eventually lead to new cloaking technologies.
The camera was developed by a team at Washington University in St. Louis—for the team's first tests, it was able to visualize laser pulse reflections, photons racing through air and through resin, and "faster-than-light propagation of non-information." It can also be used in conjunction with telescopes and to image optical and quantum communications, according to lead researcher Liang Gao.

Submission + - ISPs Must Take Responsibility For Sony Movie Leaks, UK MP Says (torrentfreak.com)

An anonymous reader writes: As the fallout from the Sony hack continues, who is to blame for the leak of movies including Fury, which has been downloaded a million times? According to the UK Prime Minister's former IP advisor, as "facilitators" web-hosts and ISPs must step up and take some blame.

Mike Weatherley MP, the recent IP advisor to Prime Minister David Cameron, has published several piracy reports including one earlier in the year examining the advertising revenue on pirate sites. He believes that companies with no direct connection to the hack or subsequent leaks should shoulder some blame.

“Piracy is a huge international problem. The recent cyber-attack on Sony and subsequent release of films to illegal websites is just one high-profile example of how criminals exploit others’ Intellectual Property,” Weatherley writes in an email to TF.

“Unfortunately, the theft of these films – and their subsequent downloads – has been facilitated by web-hosting companies and, ultimately, ISPs who do have to step-up and take some responsibility.”

Weatherley doesn’t provide detail on precisely why web-hosts and ISPs should take responsibility for the work of malicious hackers (possibly state-sponsored) and all the subsequent fallout from attacks. The theory is that “something” should be done, but precisely what remains elusive.

Comment Re:Single-pixel what? (Score 5, Informative) 81

Ok, let's say that you want to build a 1 "mega-pixel" camera (1000x1000 pixels, for instance). You have the optics but not the sensor array. Instead, you only have a single photo-diode... which is basically a single pixel.

First approach: you decide to scan the image plane with this photo-diode, trading time for spatial resolution. You move the photo-diode to where the first pixel in the top-left corner of the sensor should be, integrate (collect the photons) for some time, then move to the second pixel position. After 1 million such movements/integrations, you have fully sampled the image plane and have a complete 1 "mega-pixel" image.
Problem: this is slow as hell, you need to position the photo-diode with high accuracy, etc.

Second approach: instead of moving the photo-diode, you modulate the incoming signal (photons) and integrate everything onto this one detector. You take a small video projector and open it up to find a component called a DMD, an array of controllable bistable micro-mirrors. Displaying an image on the video projector effectively turns this surface into a gray-scale pattern mask (note that it is not actually transmitting light, just selectively reflecting it). You put it in the image plane (at the position the sensor array would occupy) and use a lens to focus all of the light coming off the DMD surface onto the photo-diode.
Now, instead of scanning, you just display a pattern consisting of a "black" frame (fully "blocking") except for one "white" pixel ("transparent") and integrate as usual. Since you know which pattern was used for each integration, you can, as before, rebuild the image.
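This pixel-per-pixel scheme is easy to sketch in a few lines of NumPy (a toy simulation with made-up sizes, not real DMD control code):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((8, 8))      # the unknown 8x8 image in the image plane
n = scene.size

# One measurement per pixel: each DMD "pattern" is all-black except one
# white pixel, and the photo-diode integrates whatever gets through.
measurements = np.empty(n)
for i in range(n):
    pattern = np.zeros(n)
    pattern[i] = 1.0                          # single transparent pixel
    measurements[i] = pattern @ scene.ravel() # photo-diode reading

# Since we know which pattern produced each reading, reassembly is trivial.
image = measurements.reshape(scene.shape)
assert np.allclose(image, scene)
```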

Second approach, first improvement: instead of lighting up one pixel at a time, you can use specific patterns. The basic idea is to integrate photons coming from multiple pixels at the same time and reconstruct with a specific algorithm. Express the problem as a linear equation A x = y, where x is the input image, A is the measurement operator (a matrix representing the system) and y is the measured vector. In the previous case, you were measuring pixel per pixel, which is equivalent to modelling A as the identity matrix (ones on the main diagonal, zeros everywhere else, so y = x). Imagine now that you use another matrix (another way to combine multiple pixels), such that each row of A is a pattern you display on the DMD and the matrix is still square and full-rank (a well-defined system). In the end you can still reconstruct x from y with x = A' y (where A' is the inverse of A) and get back your image.
Why would you do this? Well, instead of getting a handful of photons through a tiny opening, you will be measuring many more photons per exposure, which is a good thing as any real-world detector is noisy. You thus increase the signal-to-noise ratio.
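The same idea with a square, full-rank pattern matrix, again as a hedged NumPy toy (random 0/1 patterns standing in for real DMD frames):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x = rng.random(n)                       # unknown image, flattened

# Random 0/1 patterns: each row of A lights up about half the pixels,
# so every measurement collects far more photons than a single pixel would.
A = rng.integers(0, 2, size=(n, n)).astype(float)
assert np.linalg.matrix_rank(A) == n    # well-defined (full-rank) system

y = A @ x                               # one photo-diode reading per pattern
x_rec = np.linalg.solve(A, y)           # invert the system: x = A' y
assert np.allclose(x_rec, x)
```

In a noiseless simulation both schemes are equivalent; the multi-pixel patterns only pay off once you add detector noise, because each reading averages over many pixels.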

Second approach, second improvement: the main problem with the previous system is that, to obtain a 1 mega-pixel image, you still need to do 1 million projections/measurements, which is a lot and makes the whole process slow. But you know for a fact that images are compressible signals (JPEG is proof of that), which means you can represent any 1 mega-pixel image with a much smaller vector. This is because natural images are not random structures; they possess some level of coherency, i.e. redundancy between pixels. So instead of making as many projections as there are pixels (a square matrix), you do fewer, say by a factor of 4 to 10. The matrix A becomes rectangular and you have to use a more complex reconstruction algorithm (non-linear, or based on convex optimization) which takes into account prior knowledge you have about natural images (think of it as external constraints that help make the system sufficiently well behaved).
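A toy version of this compressive step can be sketched as follows. Instead of the convex-optimization solvers mentioned above, this sketch uses orthogonal matching pursuit, a simple greedy stand-in, on an artificial sparse signal (all sizes and the seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                     # 100 "pixels", only 40 measurements

x = np.zeros(n)                          # a k-sparse stand-in for a compressible image
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # rectangular measurement matrix
y = A @ x                                # fewer measurements than unknowns

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
idx, residual = [], y.copy()
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ coef

x_rec = np.zeros(n)
x_rec[idx] = coef
assert np.allclose(x_rec, x, atol=1e-6)  # recovered despite m < n
```

The sparsity prior is what makes the underdetermined system (40 equations, 100 unknowns) solvable at all.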

This is basically how single-pixel cameras work (with compressive sensing)...

I'll pass for the bonus point.

Submission + - Single Pixel Camera Takes Images Through Breast Tissue 1

KentuckyFC writes: Single pixel cameras are currently turning photography on its head. They work by recording lots of exposures of a scene through a randomising medium such as frosted glass. Although seemingly random, these exposures are correlated because the light all comes from the same scene. So it's possible to number-crunch the image data looking for this correlation and then use it to reassemble the original image. Physicists have been using this technique, called ghost imaging, for several years to make high resolution images, 3D photos and even 3D movies. Now one group has replaced the randomising medium with breast tissue from a chicken. They've then used the single pixel technique to take clear pictures of an object hidden inside the breast tissue. The potential for medical imaging is clear. Curiously, this technique has a long history dating back to the 19th century, when Victorian doctors would look for testicular cancer by holding a candle behind the scrotum and looking for suspicious shadows. The new technique should be more comfortable.
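The number-crunching step the summary describes, correlating many random exposures with the single-pixel readings, can be sketched as a toy computational ghost-imaging demo (not the researchers' actual method; all sizes and the seed are made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                                  # tiny 16x16 "scene"
scene = (rng.random(n) > 0.7).astype(float)  # binary object behind the medium

M = 5000                                     # number of random exposures
patterns = rng.random((M, n))                # seemingly-random illumination
bucket = patterns @ scene                    # one single-pixel reading each

# Correlate each pixel's illumination history with the bucket signal:
# pixels belonging to the object co-vary with the total detected light.
ghost = (bucket - bucket.mean()) @ (patterns - patterns.mean(axis=0)) / M

corr = np.corrcoef(ghost, scene)[0, 1]
assert corr > 0.5                            # reconstruction resembles the scene
```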

Comment "A picture may be worth a thousand words..." (Score 1) 29

Especially considering a 1 mega-pixel image in 8-bit gray-scale: that's 1 MB worth of information. Assuming 8 characters on average per word (including punctuation and spaces) and 250 words per page in whatever 16-bit character encoding, a page comes to about 4 KB, so the image weighs the same as a book of roughly 250 pages.
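Checking the back-of-the-envelope arithmetic directly (using the comment's own assumed word and page sizes), a page comes to 4 KB and the image to 250 pages:

```python
image_bytes = 1_000 * 1_000 * 1   # 1 mega-pixel, 8 bits (1 byte) per pixel
chars_per_word = 8                # including punctuation/spaces (assumption)
words_per_page = 250              # assumption
bytes_per_char = 2                # 16-bit character encoding

bytes_per_page = words_per_page * chars_per_word * bytes_per_char  # 4000
pages = image_bytes / bytes_per_page
print(pages)  # 250.0
```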

Submission + - Ask Slashdot: Easy Programming Environment For Processing Video And Audio?

An anonymous reader writes: Dear Slashdotters: A couple of pals and I want to test out a few ideas we have for processing video and audio files using code. We are looking for a programming language that is a) uncomplicated to learn, b) runs reasonably fast (compiled, not interpreted, please) and c) can read and write video and audio files with relative ease. Read/write support for common file formats like AVI, Video For Windows, QuickTime, MP3 and WAV would make our job much easier. The icing on the cake would be if the IDE/language/compiler used is free and runs on Windows as well as MacOS (we may try Linux further down the line as well). Any suggestions? Please note that we are looking for a rapid prototyping language that is quick to set up, makes it easy to throw some working video/audio code together, and test it against an array of digital test footage/audio, rather than a language for creating a final consumer release (which would likely be C++, Assembly or similar). The ability to build a basic user interface for our experimental video/audio algos — sliders, buttons, data entry fields — would also be a plus, although we wouldn't be building hugely complex UIs at this stage. And one more bonus question — are any of the visual/node-based audio & video processing environments out there, like http://vvvv.org/, any good for this kind of algorithm prototyping? (We want the final algos resulting from the effort available in code or flowchart form.) Thanks for any help — Five Anonymous Video/Audio Processing Freaks =)
