
Linus on Linux's 25th Birthday (105)

The creator of Linux, Linus Torvalds, posted his famous message announcing Linux on August 25, 1991, claiming that it was "just a hobby, won't be big and professional like gnu." ZDNet's Steven J. Vaughan-Nichols caught up with Linus Torvalds and talked about Linux's origins in a series of interviews: "SJVN: What's Linux's real birthday? You're the proud papa, when do you think it was? When you sent out the newsgroup post to the Minix newsgroup on August 25, 1991? When you sent out the 0.01 release to a few friends?

LT: I think both of them are valid birthdays. The first newsgroup post is more public (August 25), and you can find it with headers giving date and time and everything. In contrast, I don't think the 0.01 release was ever announced in any public setting (only in private to a few people who had shown interest, and I don't think any of those emails survived). These days the way to find the 0.01 date (September 17) is to go and look at the dates of the files in the tar-file that still remains. So, both of them work for me. Or either. And, by the way, some people will argue for yet other days. For example, the earliest public semi-mention of Linux was July 3: that was the first time I asked for some POSIX docs publicly on the minix newsgroup and mentioned I was working on a project (but didn't name it). And at the other end, October 5 was the first time I actually publicly announced a Linux version: 'version 0.02 (+1 (very small) patch already).' So you might have to buy four cakes if you want to cover all the eventualities."
Vaughan-Nichols goes on to pick Linus' brain about what he was doing when he created Linux. In honor of Linux's 25th birthday today, let's all sing happy birthday... 1... 2... 3...
Social Networks

Researchers Create Algorithm That Diagnoses Depression From Your Instagram Feed (81)

An anonymous reader quotes a report from Inverse: Harvard University's Andrew Reece and the University of Vermont's Chris Danforth crafted an algorithm that can correctly diagnose depression, with up to 70 percent accuracy, based on a patient's Instagram feed alone. After a careful screening process, the team analyzed almost 50,000 photos from 166 participants, all of whom were Instagram users and 71 of whom had already been diagnosed with clinical depression. Their results confirmed their two hypotheses: first, that "markers of depression are observable in Instagram user behavior," and second, that "these depressive signals are detectable in posts made even before the date of first diagnosis." The duo had good rationale for both hypotheses. Photos shared on Instagram, despite their innocent appearance, are data-laden: photos are taken either during the day or at night, indoors or outdoors. They may include or exclude people. The user may or may not have used a filter. You can imagine an algorithm drooling over these binary inputs, all of which reflect a person's preferences and, in turn, their well-being. Metadata is likewise full of analyzable information: How many people liked the photo? How many commented on it? How often does the user post, and how often do they browse? Many studies have shown that depressed people both perceive less color in the world and prefer dark, anemic scenes and images; the majority of healthy people, on the other hand, prefer colorful things. [Reece and Danforth] collected each photo's hue, saturation, and value averages. Depressed people, they found, tended to post photos that were more bluish, unsaturated, and dark. "Increased hue, along with decreased brightness and saturation, predicted depression," they write. The researchers found that happy people post less often than depressed people, that happy people post photos with more people in them than their depressed counterparts, and that depressed participants were less likely to use filters.
The majority of "healthy" participants chose the Valencia filter, while the majority of "depressed" participants chose the Inkwell filter. Inverse has a neat little chart embedded in their report that shows the usage of Instagram filters between depressed and healthy users.
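The per-photo color features described above -- average hue, saturation, and value over a photo's pixels -- are easy to picture in code. Below is a minimal, hypothetical Python sketch of that feature extraction using only the standard library; it is an illustration of the idea, not the authors' actual pipeline, and the sample "photos" are synthetic pixel lists.

```python
import colorsys

def hsv_averages(pixels):
    """Average hue, saturation, and value over a list of (r, g, b)
    tuples with channels in 0-255. Illustrative only: the study's
    real pipeline worked on full Instagram images."""
    n = len(pixels)
    h_sum = s_sum = v_sum = 0.0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        h_sum += h
        s_sum += s
        v_sum += v
    return h_sum / n, s_sum / n, v_sum / n

# A dark, desaturated, bluish "photo" versus a bright, saturated one.
bluish = [(40, 50, 90)] * 100
bright = [(230, 120, 40)] * 100

h1, s1, v1 = hsv_averages(bluish)
h2, s2, v2 = hsv_averages(bright)
# The depression-associated profile from the study: higher hue
# (toward blue), lower saturation, lower value (brightness).
print(h1 > h2, s1 < s2, v1 < v2)
```

Features like these, plus metadata (likes, comments, posting frequency), are the "binary inputs" a classifier would train on.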

WikiLeaks Published Rape Victims' Names, Credit Cards, Medical Data (301)

Joe Mullin, writing for Ars Technica: Even as WikiLeaks founder Julian Assange sits trapped in the Ecuadorean embassy, the WikiLeaks website continues to publish the secrets of various governments worldwide. But that's not all it's publishing. A report today by the Associated Press highlights citizens who had "sensitive family, financial or identity records" published by the site. "They published everything: my phone, address, name, details," said one Saudi man whose paternity dispute was revealed in documents published by the site. "If the family of my wife saw this... Publishing personal stuff like that could destroy people." One document dump, from Saudi diplomatic cables, held at least 124 medical files. The files named sick children, refugees, and patients with psychiatric conditions. In one case, the cables included the name of a Saudi who was arrested for being gay. In Saudi Arabia, homosexuality is punishable by death. In two other cases, WikiLeaks published the names of teenage rape victims. "This has nothing to do with politics or corruption," said Dr. Nayef al-Fayez, who had a patient with brain cancer whose personal details were published.

Intel Demos A New Robotics Controller Running Ubuntu (21)

Intel demoed their new robotics compute module this week. Scheduled for release in 2017, it's equipped with various sensors, including a depth-sensing camera, and it runs Ubuntu on a quad-core Atom. Slashdot reader DeviceGuru writes: Designed for researchers, makers, and robotics developers, the device is a self-contained, candy-bar sized compute module ready to pop into a robot. It's augmented with a WiFi hotspot, Bluetooth, GPS, and IR, as well as proximity, motion, and barometric pressure sensors. There's also a snap-on battery.

The device is preinstalled with Ubuntu 14.04 and Robot Operating System (ROS) Indigo, and can act as a supervisory processor to, say, an Arduino subsystem that controls a robot's low-level functions. Intel demoed a Euclid-driven robot running obstacle-avoidance and follow-me tasks, including during CEO Brian Krzanich's keynote (YouTube video).

Intel says they'll also release instructions on how to create an accompanying robot with a 3D printer. This plug-and-play robotics module is a proof-of-concept device -- the article includes some nice pictures -- but it already supports programming in Node.js (and other high-level languages), and has a web UI that lets you monitor performance in real-time and watch the raw camera feeds.

People Ignore Software Security Warnings Up To 90% of the Time, Says Study (124)

An anonymous reader quotes a report from Phys.Org: A new study from BYU, in collaboration with Google Chrome engineers, finds that the status quo of warning messages appearing haphazardly -- while people are typing, watching a video, uploading files, etc. -- results in up to 90 percent of users disregarding them. Researchers found these times are less effective because of "dual task interference," a neural limitation whereby even simple tasks can't be performed simultaneously without significant performance loss -- or, in human terms, multitasking. For example, 74 percent of people in the study ignored security messages that popped up while they were on the way to close a web page window. Another 79 percent ignored the messages if they were watching a video. And a whopping 87 percent disregarded the messages while they were transferring information, in this case a confirmation code. By contrast, Jenkins, Vance, and BYU colleagues Bonnie Anderson and Brock Kirwan found that people pay the most attention to security messages when they pop up during periods of low dual-task interference, such as after watching a video, while waiting for a page to load, or after interacting with a website. For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself. The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.

Startup Aims To Commercialize a Brain Implant To Improve Memory (85)

the_newsbeagle writes: Neuroscientist Ted Berger has achieved some remarkable feats in his work on an implanted brain prosthetic to boost memory. Working with rats, he recorded the electrical signals associated with a specific memory from one animal's brain, then inserted that signal -- and thus the memory -- into another animal's brain. Working with monkeys, the implanted device enhanced the animals' recall in difficult memory tasks.

Still, it's startling to learn that a startup is ready to commercialize Berger's work, and is trying to build a memory prosthetic for humans suffering from Alzheimer's, brain injuries, and stroke. The new company, named Kernel, will fund human trials and develop electrodes that can record from and stimulate more brain cells.
"An implanted memory prosthetic would have electrodes to record signals during learning, a microprocessor to do the computations, and electrodes that stimulate neurons to encode the information as a memory," writes Eliza Strickland via IEEE Spectrum.

Former Twitter Employees: 'Abuse Problem' Comes From Their Culture Of Free Speech (465)

Twitter complained of "inaccuracies in the details and unfair portrayals" in an article which described their service as "a honeypot for assholes." Buzzfeed interviewed 10 "high-level" former employees who detailed a company "fenced in by an abiding commitment to free speech above all else and a unique product that makes moderation difficult and trolling almost effortless." An anonymous Slashdot reader summarizes their report: Twitter's commitment to free speech can be traced to employees at Google's Blogger platform who all went on to work at Twitter. They'd successfully fought for a company policy that "We don't get involved in adjudicating whether something is libel or slander... We'll do it if we believe we are required to by law." One former Twitter employee says "The Blogger brain trust's thinking was set in stone by the time they became Twitter Inc."

Twitter was praised for providing an uncensored voice during 2009 elections in Iran and the Arab Spring, and fought the secrecy of a government subpoena for information on their WikiLeaks account. The former head of news at Twitter says "The whole 'free speech wing of the free speech party' thing -- that's not a slogan. That's deeply, deeply embedded in the DNA of the company... [Twitter executives] understand that this toxicity can kill them, but how do you draw the line? Where do you draw the line? I would actually challenge anyone to identify a perfect solution. But it feels to a certain extent that it's led to paralysis."

While Twitter now says they are working on the problem, Buzzfeed argues this "maximalist approach to free speech was integral to Twitter's rise, but quickly created the conditions for abuse... Twitter has made an ideology out of protecting its most objectionable users. That ethos also made it a beacon for the internet's most vitriolic personalities, who take particular delight in abusing those who use Twitter for their jobs."

8 Paralyzed Patients Learn To Walk Again Using Virtual Reality (17)

An anonymous reader quotes a report from Gizmodo: In a new study published in Scientific Reports, eight patients paralyzed with spinal cord injuries exhibited partial restoration of muscle control and sensations in their lower limbs following an extensive training regimen with non-invasive brain-controlled robotics and a virtual reality system. Developed by Duke University neuroscientist Miguel Nicolelis and colleagues, the system tapped into the patients' own brain activity to simulate full control of their legs, causing the injured parts of their spinal cord to re-engage. Brain-machine interfaces (BMIs) work by establishing direct communication between the brain and a computer, which then allows patients to control external devices with their thoughts, including prosthetic limbs or exoskeletons. Earlier this year, Nicolelis showed that it was possible for a monkey to control a wheelchair with its mind, though with an implanted brain chip. In the new experiment, the system non-invasively recorded hundreds of the patients' brain signals, extracted motor commands from those signals, and then translated them into movements. During the year-long experiment, Nicolelis and his team investigated the ways in which BMI-based training could influence the ability of paraplegics to walk using a brain-controlled exoskeleton. To augment this process, they turned to virtual reality, which assisted with visualization and mind-body awareness. While in a virtual reality environment, and when hooked up to the exoskeletons, the patients could see virtual representations of their own bodies, and even receive tactile feedback.

Brains of Overweight People Look Ten Years Older Than Those of Lean Peers, Says Report (184)

An anonymous reader quotes a report from The Guardian: The brains of people who are obese or overweight appear to have aged an extra 10 years compared to their lean peers from middle age onwards, brain scanning research has revealed. The difference, scientists say, corresponds to a greater shrinkage in the volume of white matter, although they don't know the cause. It might be down to genes causing both brain-shrinking and obesity, or it could be that changes occurring in the brain lead to overeating. Either way, it does not appear to affect cognitive performance. White matter is tissue, composed of nerve fibers, that aids communication between different regions of the brain. The volume of white matter in a human brain increases during youth and then decreases with age for both lean people and those who are overweight or obese. But researchers have discovered that this shrinkage differs depending on a subject's BMI. "The overall message is that brains basically appear to be 10 years older if you are overweight or obese," said Lisa Ronan, first author of the study from the University of Cambridge. Despite a higher BMI being linked to a smaller volume of white matter, it did not appear to have any link to mental prowess, with no difference seen between lean and overweight or obese participants when they were subjected to IQ tests. Scientists from the University of Cambridge and Yale University have published their findings in the journal Neurobiology of Aging.

IBM Creates World's First Artificial Phase-Change Neurons (69)

An anonymous reader writes from a report via Ars Technica: IBM has created the world's first artificial nanoscale stochastic phase-change neurons and has already created and used a population of 500 of them to process a signal in a similar manner as the brain. Ars Technica reports: "Like a biological neuron, IBM's artificial neuron has inputs (dendrites), a neuronal membrane (lipid bilayer) around the spike generator (soma, nucleus), and an output (axon). There's also a back-propagation link from the spike generator back to the inputs, to reinforce the strength of some input spikes. The key difference is in the neuronal membrane. In IBM's neuron, the membrane is replaced with a small square of germanium-antimony-tellurium (GeSbTe or GST). GST, which happens to be the main active ingredient in rewritable optical discs, is a phase-change material. This means it can happily exist in two different phases (in this case crystalline and amorphous), and easily switch between the two, usually by applying heat (by way of laser or electricity). A phase-change material has very different physical properties depending on which phase it's in: in the case of GST, its amorphous phase is an electrical insulator, while the crystalline phase conducts. With the artificial neurons, the square of GST begins life in its amorphous phase. Then, as spikes arrive from the inputs, the GST slowly begins to crystallize. Eventually, the GST crystallizes enough that it becomes conductive -- and voila, electricity flows across the membrane and creates a spike. After an arbitrary refractory period (a resting period where something isn't responsive to stimuli), the GST is reset back to its amorphous phase and the process begins again." The research has been published via the journal Nature.
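The integrate-and-fire cycle described above -- input spikes gradually crystallize the GST cell, the neuron fires once the cell becomes conductive, then resets to the amorphous phase -- can be caricatured in a few lines of Python. This is a toy model under assumed constants, not IBM's device physics; the jittered threshold merely stands in for the stochastic switching behavior the researchers exploit.

```python
import random

class PhaseChangeNeuron:
    """Toy model of the behaviour described above: input spikes
    gradually 'crystallize' the cell, it fires once conduction sets
    in, then it resets to the amorphous state. The jittered
    threshold mimics stochastic switching; none of the constants
    come from the IBM paper."""
    def __init__(self, threshold=1.0, jitter=0.1, seed=42):
        self.rng = random.Random(seed)
        self.base_threshold = threshold
        self.jitter = jitter
        self.reset()

    def reset(self):
        # Refractory reset: back to the amorphous (insulating) phase,
        # with a freshly randomized switching threshold.
        self.crystallization = 0.0
        self.threshold = self.base_threshold + self.rng.uniform(
            -self.jitter, self.jitter)

    def receive(self, spike_strength):
        """Integrate one input spike; return True if the neuron fires."""
        self.crystallization += spike_strength
        if self.crystallization >= self.threshold:
            self.reset()
            return True
        return False

neuron = PhaseChangeNeuron()
fires = [neuron.receive(0.3) for _ in range(10)]
print(sum(fires))  # a handful of output spikes from ten weak inputs
```

Populations of such units, each integrating noisy inputs with slightly different thresholds, are what let the 500-neuron array process signals collectively.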

Olympic Swimmers 'Certain' To Pick Up Virus From Three Teaspoons of Rio Water (280)

An anonymous reader writes from a report via The Independent: The Associated Press has released a 16-month-long study showing that, just days before the Olympic Games in Rio de Janeiro begin, the waterways in the city are teeming with dangerous viruses and bacteria. The report says both athletes and tourists are at risk of getting ill from the contaminated water. "The first results of the study published over a year ago showed viral levels at up to 1.7 million times what would be considered worrisome in the United States or Europe," reports The Independent. "At those concentrations, swimmers and athletes who ingest just three teaspoons of water are almost certain to be infected with viruses that can cause stomach and respiratory illnesses and, more rarely, heart and brain inflammation -- although whether they actually fall ill depends on a series of factors including the strength of the individual's immune system." Many of the athletes have been taking antibiotics, bleaching oars, and donning plastic suits and gloves to prevent illnesses, but antibiotics combat bacterial infections, not viruses. The AP investigation found infectious adenovirus readings at nearly 90 percent of the test sites over 16 months of testing. What's more, "the beaches often have levels of bacterial markers for sewage pollution that would be cause for concern abroad -- and sometimes even exceed Rio state's lax water safety standards," reports The Independent.

Nintendo NX Is a Portable Console With Detachable Controllers, Says Report (158)

An anonymous reader writes: We now have a good idea as to what the Nintendo NX will consist of, thanks to a new report from Eurogamer. According to a number of sources, Nintendo's upcoming NX will be a portable, handheld console with detachable controllers. Eurogamer reports: "On the move, NX will function as a high-powered handheld console with its own display. So far so normal -- but here's the twist: we've heard the screen is bookended by two controller sections on either side, which can be attached or detached as required. Then, when you get home, the system can connect to your TV for gaming on the big screen. A base unit, or dock station, is used to connect the brain of the NX -- within the controller -- to display on your TV. NX will use game cartridges as its choice of physical media, multiple sources have also told [Eurogamer]. Another source said the system would run on a new operating system from Nintendo. It won't, contrary to some earlier rumors, simply run on Android. [...] The system will harness Nvidia's powerful mobile processor Tegra. Graphical comparisons with current consoles are difficult due to the vastly different nature of the device -- but once again we've heard Nintendo is not chasing graphical parity. Quite the opposite, it is sacrificing power to ensure it can squeeze all of this technology into a handheld, something which also tallies with earlier reports. Finally, we've heard from one source that NX planning has recently moved up a gear within Nintendo ahead of the console's unveiling, which is currently slated for September. After the confused PR fiasco of the Wii U launch, the company is already settling on a simple marketing message for NX -- of being able to take your games with you on the go."

Can Computerized Brain Training Prevent Dementia? (49)

"Researchers believe they have found a link between speed-of-processing training and a reduction in cognitive decline among the elderly," reports the New Yorker. An anonymous Slashdot reader quotes an article about how this new long-term study actually contradicts much of the previous science. In October of 2014 a group of more than seventy academics published what they called a consensus statement, asserting that playing brain games had been shown to improve little more than the ability to play brain games... no brain game, nor any drug, dietary supplement, or lifestyle intervention, had ever been shown in a large, randomized trial to prevent dementia...until today, when surprising new results were announced at the Alzheimer's Association annual meeting, in Toronto.
Nearly 3,000 people with an average age of 73.6 participated in the study, with some receiving "speed of processing" training -- and some later receiving four hours of additional training. "The researchers calculated those who completed at least some of these booster sessions were 48% less likely to be diagnosed with dementia after ten years than their peers in the control group." Signatories of the 2014 consensus statement panning brain games are now calling these new results "remarkable" and "spectacular".

Neuroscientists Have Isolated The Part Of The Brain That Controls Free Will (285)

An anonymous reader quotes a report from ExtremeTech: Free will might have been the province of philosophers until now, but we've cracked the problem with an fMRI. Neuroscientists from Johns Hopkins report in the journal Attention, Perception, & Psychophysics that they were able to see both what happens in a human brain the moment a free choice is made, and what happens during the lead-up to that decision -- how activity in the brain changes during the deliberation over whether to act. The team devised a novel way to track a participant's focus without using cues or commands, avoiding a Schrödinger-like dilemma of altering the process of choice by calling attention to it. Participants took positions in MRI scanners, and then were left alone to watch a split screen as rapid streams of colorful numbers and letters scrolled past on both sides. They were asked just to pay attention to one side for a while, then to the other side. When to switch sides, and for how long to look, was entirely up to them. Over the duration of the experiment, the participants glanced back and forth, switching sides dozens of times. In terms of connectivity in the brain, the actual process of switching attention from one side to the other was tightly linked with activity in the parietal lobe, which is sort of the top back quadrant of the brain. Activity during the period of deliberation before a choice took place in the frontal cortex, which engages in reasoning and plans movement. Deliberation also lit up the basal ganglia, important parts of the deep brain that handle motor control, including the initiation of motion. Participants' frontal-lobe activity began earlier than it would have if participants had been cued to shift attention, which demonstrates that the brain was planning a voluntary action rather than merely following an order.

Do You Have A Living Doppelgänger? (142)

Folk wisdom has it that everyone has a doppelganger; somewhere out there there's a perfect duplicate of you, with your mother's eyes, your father's nose and that annoying mole you've always meant to have removed. Now the BBC reports that last year Teghan Lucas set out to test the hypothesis that everyone has a living double. Armed with a public collection of photographs of U.S. military personnel and the help of colleagues from the University of Adelaide, Lucas painstakingly analyzed the faces of nearly four thousand individuals, measuring the distances between key features such as the eyes and ears. Next she calculated the probability that two people's faces would match. What she found was good news for the criminal justice system, but likely to disappoint anyone pining for their long-lost double: the chances of sharing just eight dimensions with someone else are less than one in a trillion. Even with 7.4 billion people on the planet, that's only a one in 135 chance that there's a single pair of doppelgangers. Lucas says this study has provided much-needed evidence that facial anthropometric measurements are as accurate as fingerprints and DNA when it comes to identifying a criminal. "The use of video surveillance systems for security purposes is increasing and as a result, there are more and more instances of criminals leaving their 'faces' at a scene of a crime," says Ms Lucas. "At the same time, criminals are getting smarter and are avoiding leaving DNA or fingerprint traces at a crime scene." But that's not the whole story. The study relied on exact measurements; if your doppelganger's ears are 59mm but yours are 60mm, your likeness wouldn't count. "It depends whether we mean 'lookalike to a human' or 'lookalike to facial recognition software,'" says David Aldous. If fine details aren't important, suddenly the possibility of having a lookalike looks a lot more realistic.
It depends on the way faces are stored in the brain: more like a map than an image. To ensure that friends and acquaintances can be recognized in any context, the brain employs an area known as the fusiform gyrus to tie all the pieces together. This holistic 'sum of the parts' perception is thought to make recognizing friends a lot more accurate than it would be if their features were assessed in isolation. Using this type of analysis, and judging by the number of celebrity look-alikes out there, unless you have particularly rare features, you may have literally thousands of doppelgangers. "I think most people have somebody who is a facial lookalike unless they have a truly exceptional and unusual face," says Francois Brunelle, who has photographed more than 200 pairs of doppelgangers for his I'm Not a Look-Alike project. "I think in the digital age which we are entering, at some point we will know, because there will be pictures of almost everyone online."
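The two headline numbers above fit together with a little arithmetic: 7.4 billion people form about 2.7 x 10^19 distinct pairs, so a roughly one-in-135 overall chance of any matching pair implies a per-pair match probability far below one in a trillion. A back-of-the-envelope Python check (treating matches as rare, independent events, which is an assumption, not something the study states):

```python
population = 7_400_000_000
pairs = population * (population - 1) // 2   # distinct pairs of people

# The article puts the overall chance of even one doppelganger pair
# existing at about 1 in 135. For rare independent events, the
# implied per-pair match probability is roughly:
p_overall = 1 / 135
p_per_pair = p_overall / pairs

print(f"{pairs:.2e} pairs")          # ~2.7e19
print(f"{p_per_pair:.1e} per pair")  # ~3e-22, far below one in a trillion
```

The per-pair figure being orders of magnitude under one in a trillion is consistent with the "less than one in a trillion" claim for an exact eight-dimension match.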

Alzheimer's Gene Already Shrinking Brain By Age of Three (62)

schwit1 quotes a report from The Telegraph: The Alzheimer's gene, which dramatically raises the risk of developing dementia, is already affecting carriers by the age of three, shrinking their brains and lowering cognition, a new study suggests. Children who carry the APOEe4 gene mutation, which raises the chance of dementia 15-fold, were found to do less well in memory, attention and function tests. Areas of the brain affected by Alzheimer's disease, such as the hippocampus and parietal gyri, were also found to be up to 22 percent smaller in volume. [Around 14 percent of people carry the APOEe4 mutation. The research is the first to show that genetic changes which can lead to Alzheimer's are already affecting the brain extremely early in life. Scientists from the University of Hawaii, Yale and Harvard say screening for the gene could help doctors identify which children could benefit from early interventions, such as educational help, preventative treatments, health monitoring and increased exercise. The study involved 1,187 youngsters between the ages of three and 20 who took part in genetic tests and brain scans as well as undertaking a series of tests to measure their thinking and memory skills.] Separately, according to research from Oregon Health and Science University (OHSU), infrequent use of a computer in later life could be an early sign of reduced cognitive ability.

Linus Torvalds In Sweary Rant About Punctuation In Kernel Comments (523)

An anonymous reader shares a report on The Register: Linus Torvalds has unleashed a sweary rant on the Linux Kernel Mailing List, labelling some members "brain-damaged" for their preferred method of punctuating comments. "Can we please get rid of the brain-damaged stupid networking comment syntax style, PLEASE?" the Linux Lord asked last Friday. "If the networking people cannot handle the pure awesomeness that is a balanced and symmetric traditional multi-line C style comments, then instead of the disgusting unbalanced crap that you guys use now, please just go all the way to the C++ mode." Torvalds despises the following two comment-punctuation styles (with his comments):

    /* This is disgusting drug-induced
     * crap, and should die
     */

and:

    /* This is also very nasty
     * and visually unbalanced */

Torvalds prefers the following two styles:

    /* This is a comment */

and:

    /*
     * This is also a comment, but it can now be cleanly
     * split over multiple lines
     */


MRI Software Bugs Could Upend Years Of Research (95)

An anonymous reader shares a report on The Register: A whole pile of "this is how your brain looks like" MRI-based science has been invalidated because someone finally got around to checking the data. The problem is simple: to get from a high-resolution magnetic resonance imaging scan of the brain to a scientific conclusion, the brain is divided into tiny "voxels". Software, rather than humans, then scans the voxels looking for clusters. When you see a claim that "scientists know when you're about to move an arm: these images prove it", they're interpreting what they're told by the statistical software. Now, boffins from Sweden and the UK have cast doubt on the quality of the science, because of problems with the statistical software: it produces way too many false positives. In this paper at PNAS, they write: "the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results."
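The paper's specific finding concerns flawed cluster-wise inference, but the underlying hazard -- run enough statistical tests on noise and "significant" voxels appear by chance -- is easy to demonstrate. The sketch below is a simplified illustration of that multiple-comparisons problem, not a reproduction of the SPM/FSL/AFNI bug: it runs uncorrected t-tests on thousands of pure-noise "voxels," where each test alone carries about a 5% false-positive risk, so hundreds come up significant.

```python
import random
import statistics

def t_stat(sample):
    """One-sample t statistic against a true mean of zero."""
    n = len(sample)
    return statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)

rng = random.Random(0)
n_voxels, n_subjects = 5000, 20
crit = 2.093  # two-sided p < 0.05 critical value for t with 19 df

# Pure noise: no voxel carries any real signal, yet many "light up."
hits = 0
for _ in range(n_voxels):
    sample = [rng.gauss(0, 1) for _ in range(n_subjects)]
    if abs(t_stat(sample)) > crit:
        hits += 1

print(f"{hits / n_voxels:.1%} of noise voxels 'significant'")
```

Proper fMRI analysis corrects for this family-wise error; the Swedish/UK result showed that the cluster-level corrections in the major packages were themselves miscalibrated.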

Multitasking Drains Your Brain's Energy Reserves, Researchers Say (106)

An anonymous reader quotes an article from Quartz: When we attempt to multitask, we don't actually do more than one activity at once, but quickly switch between them. And this switching is exhausting. It uses up oxygenated glucose in the brain, running down the same fuel that's needed to focus on a task...

"That switching comes with a biological cost that ends up making us feel tired much more quickly than if we sustain attention on one thing," says Daniel Levitin, professor of behavioral neuroscience at McGill University. "People eat more, they take more caffeine. Often what you really need in that moment isn't caffeine, but just a break. If you aren't taking regular breaks every couple of hours, your brain won't benefit from that extra cup of coffee."

Anyone have any anecdotal experiences that back this up?
