Submission + - Hackaday Prize for Trip to Space Nearing Submission Deadline (hackaday.io)

upontheturtlesback writes: The Hackaday Prize asks makers and tinkerers to submit their open source hardware projects and build logs for a chance to win a trip to space. This year's theme is problems yearning for solutions, which might include projects that help provide medical care, conserve water, clear pollution, or promote access to education. With a little over four weeks until the submission deadline, the odds of winning a prize are still very good: as of this morning, the odds for the best product prize (which includes $100,000 cash) stand at just over 30:1, and many other prizes, such as laser cutter time or instruments to help build your project, are given away regularly. Last year's semifinalists included a global network of satellite ground stations, embedded hardware security research, a portable software defined radio, a science tricorder, and a 3D-printable Raman spectrometer.

Comment HoloLens (Score 2) 102

"Transform your world with holograms. Microsoft HoloLens, together with Windows 10, brings high-definition holograms to life in your world." I don't normally say this, but this is idiotic. Suddenly we're erroneously calling Virtual Reality and Augmented Reality holographic, despite the fact that these systems have been around for years? Making a good AR system would be a significant contribution on its own, without the marketing gimmick of mislabelling it a hologram. A hologram in the physical sense involves recording a light field, and in the popular/science fiction sense it involves projecting three dimensional objects in space so that everyone can see them, not projecting things into a particular person's eyes through a headset so that only they can see them (which has been done for decades). To the best of my knowledge we still have no idea how to make a three dimensional "holographic" projection in the popular/science fiction sense.

Submission + - An X Prize competition for NASA's 'impossible' EM Drive? (examiner.com)

An anonymous reader writes: The story of the EM Drive, the prototype of a propulsion unit invented by a British scientist named Roger Shawyer and currently being tested by NASA's Eagleworks Lab at the Johnson Spaceflight Center, has created considerable excitement and controversy in the media. Reactions have ranged from "NASA may have created a warp drive" to "the EM Drive is poppycock." A proposal has cropped up at NASASpaceflight.com, a site where a great deal of discussion of the EM Drive has taken place, to create an X Prize competition that would take development of the technology to the next level.

Comment Bad summary (Score 4, Informative) 100

This is a terrible summary; it should clearly state that this was a joke effort to expose two essentially fake journals (which no one in the field considers real) as predatory outlets that accept papers for money without peer review. The summary makes it sound like this is a big deal or that these might have been important journals, but as an academic (or anyone with a university email address) you get at least ten of these offers to publish papers in random fake journals for money in your inbox every day.

For non-academics, these "journals" are basically the difference between a guy in a trench coat coming up to you on the street and offering to "publish" your book for money, and a real and respected publishing house like the MIT Press offering to publish your book after a laborious review process. If a real journal or publisher accepted a paper or book that was fake or had genuine errors, that would be substantial news (and it does happen occasionally that things get past the reviewers; they're only human), but that is very far from the case here.

Submission + - Arducorder, next open source science tricorder-like device, nears completion (hackaday.io)

upontheturtlesback writes: The Arducorder Mini, an Arduino-compatible pocket-sized handheld sensing tool and the next in the line of open source science tricorder-like devices designed by Dr. Peter Jansen, is nearing completion. Where the previous models included about a dozen sensors spanning atmospheric, electromagnetic, and spatial readings, an exciting video of the new prototype shows that this model adds sensors for spectroscopy, low-resolution thermal imaging, and radiation sensing. The development is open, with the project build logs and the most recent source schematics, board layouts, and firmware available on GitHub. This project is an entry in the Hackaday Prize for a trip to space.

Comment Re:The obvious solution (Score 1) 348

As a postdoctoral research fellow in artificial intelligence at a large university, and an open source "Gentleman Scientist" in physics and science education through the open source science tricorder project in my evenings (I have two independent educational backgrounds), I think you've overstated the simplicity of things a great deal. I know you probably didn't mean it as such, but frankly the idea that I (as someone who spent 30 years in school to become an expert in my field) should only pursue research as a hobby after somehow becoming independently wealthy is absolutely ridiculous. It takes at least 10 years (4 years of undergrad, at least 5 years of one-on-one training in graduate school, and usually a 3-year postdoc) to take a bright high school graduate and train them to be a research scientist and the beginnings of an expert in a field. That's a huge amount of time and resources committed by a society in a highly competitive environment to some of its brightest individuals, and you're suggesting that afterwards they should simply pursue their research as a self-funded hobby because the society they live in has engaged in massive social program defunding (including education and scientific research, among other things) over the last decade in favor of tax cuts for the ultra-rich? Do you have any idea how much a decade of post-secondary education costs?

While it is true that some research can be done independently by one or two people with little equipment, and that historically some folks in those circumstances have made major advances (like the ones you mention), and other self-funded scientists will undoubtedly continue to in the future, this is exceptionally rare. Even incremental research that builds on the pieces of what came before usually requires at least a small team of people and a modest equipment budget. Labs I've been in have had single pieces of fundamental equipment that cost as much as a small house. I do my research for the good of society, and generally for others to use. There is no way I could pursue my academic research on any independent budget that I will ever have. I already spend most of my "extra" (non-living-expenses) income from my academic job on open source research in my evenings as it is. It's not as though $5k buys a lot of research resources; it's an exceptionally tight and entirely self-funded budget.

You also bring up hackerspaces. I spend a good deal of time (when I'm not working on the open source tricorder project) helping teach folks how to design, make, and build at our local hackerspace. This is a fantastic resource for the community, it's incredible to see people pick up new skills and walk out with something they've put together over a day or a month, and every now and again a really interesting engineering start-up comes out of a hackerspace (like MakerBot). That being said, hackerspaces are primarily engineering-centered places for sharing making-related skills. I am unaware of a single case of a substantial piece of science coming out of a hackerspace in their entire history of existence. But even if you could point to a dozen REALLY good papers that had come out of them worldwide in the past decade, that's the same number of good papers that will come out of a medium-sized academic research institution in a day.

My mentor in grad school used to say that science is inherently a social discipline, and it took me a while to realize what he meant. Public research institutions like universities are filled with extremely bright and talented people who are (generally and largely) very good at churning out good and interesting research for exceptionally little cost compared to industry (academic wages are generally half to a quarter of what they are in industry; it takes a month to write a grant that has any chance of being funded; and the equipment budgets are usually modest). The research in many cases is openly published and available for use, and is only moving further in the direction of open access. The issue here is that as a society we invest a great deal of resources in incredibly bright people who work for relatively little simply because they love what they do and believe in research for public benefit, while massively defunding basic science research and pushing the barrier to entry to (anecdotally; I'm still young) the worst levels the retiring professors can remember.

Comment Re:so long as the duration is... (Score 2) 272

The 235 decibel blasts from these sonic cannons enter the water about every ten seconds, 24 hours a day, for weeks or months on end, per exploration mission. 235 decibels is about a million times louder than standing next to a jet engine. It kills or injures nearby life almost immediately.
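For context on those numbers: decibels are logarithmic, so a fixed dB difference corresponds to a multiplicative ratio. A quick sketch of the arithmetic (mine, not from the original comment; note also that underwater sound levels are referenced to 1 µPa while in-air levels use 20 µPa, so the two scales aren't directly comparable):

```python
# Convert a decibel difference into intensity and pressure ratios.
# Illustrative sketch only: underwater (re 1 uPa) and in-air (re 20 uPa)
# decibel scales use different references and differ by roughly 62 dB.

def db_to_intensity_ratio(delta_db):
    """Power/intensity ratio corresponding to a dB difference."""
    return 10 ** (delta_db / 10)

def db_to_pressure_ratio(delta_db):
    """Pressure (amplitude) ratio corresponding to a dB difference."""
    return 10 ** (delta_db / 20)

# A 60 dB difference is a factor of one million in intensity...
print(db_to_intensity_ratio(60))  # 1000000.0
# ...and a factor of one thousand in pressure amplitude.
print(db_to_pressure_ratio(60))   # 1000.0
```

So "a million times" corresponds to a 60 dB gap in intensity terms, keeping in mind the caveat about reference pressures above.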

The US Navy recently increased sonar exercises without a proper assessment of the risks to marine mammals. The Navy itself later estimated that the use of sonar over the five-year plan will result in the death or injury of 650,000 marine animals. Their own study.

This isn't something you can compare to the noise your neighbours make; it's essentially the shockwave from a powerful bomb that goes off every ten seconds for weeks or months and travels hundreds of kilometers in every direction, because the density of water conducts the wave far more efficiently than air. This is one of the largest compression waves that humans can generate, and it causes "hemorrhages in and around the ears" and "organ damage and internal injuries similar to decompression sickness". If you intentionally wanted to kill every living mammal in the ocean, there are few things that would accomplish it more quickly or effectively.

I'm not a US citizen, but you should contact your Congressional representative and tell them that this won't fly, immediately.

Comment Re:Interdisciplinary crossover (Score 2) 57

That's really cool! I find it elegant that the same simple model describes the behavior of such disparate systems, which look complicated on the surface but can be explained by the sum of simple mechanisms.

I agree, the summary was really well written.

That's a good question, about using similar techniques for image processing and object segmentation from a scene. From a cognitive standpoint, neonates rapidly build on this simple model over their first few months of life as they begin to represent things in world-centered rather than retina-centered coordinates, and begin to learn the basic visual features in the environment (sort of like an alphabet of shapes) that objects tend to be constructed out of. I'm not familiar with most of the image segmentation literature, but I think they're working on doing things that are conceptually similar -- having a hierarchy of feature detectors built from low-level features that eventually contain enough features to recognize entire objects.

Comment Interdisciplinary crossover (Score 5, Insightful) 57

This is really interesting and exciting work. In 2010, we showed that nearly this exact algorithm is used by neonates (newborns) to govern their visual attention and eye movements, and it explains much of what we know about newborn visual attention. It's exciting to see that when you essentially parallelize the algorithm with multiple agents that are aware of each other, it becomes an extremely efficient algorithm for resource collection in a completely different field/task. http://www.ncbi.nlm.nih.gov/pu...

Submission + - The USA Science and Engineering Festival attempts to get more kids into STEM

clay_buster writes: Companies, universities, government agencies, and NGOs are trying to excite kids about science and engineering with events like http://www.usasciencefestival.... I thought it was great (http://joe.blog.freemansoft.co...), but are these events enough to get kids excited when there are easier educational and career paths?

Comment Re:And on the far end? (Score 2) 18

If this is an all-optical switch that doesn't require high-powered lasers or other difficult to achieve non-linear optics, wouldn't this have applications for all-optical computing gates, as well? Basically use something like this to construct an all-optical transistor, and have a logic circuit powered by light instead of electricity?

Comment Systematicity, and Fodor & Pylyshyn (Score 5, Informative) 90

I have a recent PhD in neural computation, though from a functional cognitive and language modeling perspective, not a neuroanatomical modeling perspective, so it may be a different area than you're interested in. From a high-level perspective, neural computation has moved a lot in scale over the past two decades (simulations can have millions of nodes), and it has moved a lot in modeling the processes of individual neurons and neurochemistry. Very high-level functional mapping work has also moved a good deal, with fMRI, EEG, and MEG becoming relatively inexpensive and very common techniques in cognitive experiments.

One area that, in my opinion, has moved very little in the past 20 years is the ability of neural networks to learn non-trivial domain-general representations and processes, and to generalize from those representations and processes to novel (untrained) instances. In the late 80s, after connectionism had made a return with Rumelhart and McClelland's popularization of the backpropagation algorithm and demonstration of its utility on a number of tasks earlier in the decade, a good deal of the literature demonstrated very basic limitations and failures of these systems to generalize to untrained instances, or to move beyond toy problems. Fodor and Pylyshyn's "Connectionism and Cognitive Architecture" is a classic paper from that era, and Pinker wrote a lot of language-specific criticisms as well. Stefan Frank has the most recent long-standing research program in this area that I'm aware of, and his earlier papers have good literature reviews that can help guide one's background reading. There have been some limited demonstrations of systematicity with different architectures (like echo state networks), and comparatively little work on storing representations and processes simultaneously in a network; so far these remain long-standing and fundamental issues in need of revitalization.

When convincing demonstrations do arise, they'll likely not need more than a desktop to run, since they will be demonstrations of learning algorithms and architectures, not scale. For non-neural folks: classical neural network architectures are essentially very good at pattern matching and classification (e.g. being trained on handwriting and classifying each letter as one of a set of known letters, A-Z, that the network has seen many hundreds of instances of before), or at things that involve a large set of specific rules (if X then Y). They're much less good at things that involve domain-general computation, learning both representations and processes and storing them in the same system (e.g. reading a paragraph and summarizing it, answering a question, or writing a sentence describing a simple scene). That's not to say that you couldn't make a neural system that did this: you could sit down and hard-code an architecture that looked something like a von Neumann CPU and program it to play chess or be a word processor, if you really wanted. But the idea is to develop a learning algorithm that, by virtue of exposure to the world, crafts the architecture itself, so that after years of exposure the world progressively "programs" the computational/representational substrate that is the brain to recognize objects, concepts, and words, put them together into simple event representations, and do simple reasoning with them, much like an infant. I hope that helps. Of course, all of this is written by someone interested in developmental knowledge representation and language processing, so it may be a completely different question than you wanted answered. Best wishes.
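As a concrete illustration of the kind of fixed pattern-classification task these architectures handle well, here is a minimal sketch (mine, not from the comment above; all sizes and values are illustrative): a two-layer network trained with backpropagation on XOR.

```python
import numpy as np

# Minimal backpropagation sketch on XOR: a classic fixed classification
# task that small networks solve, in contrast to the open-ended
# domain-general learning discussed above. Everything here is illustrative.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradient of mean squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # the four XOR targets, if training converged
```

The network memorizes the four trained patterns perfectly, which is exactly the point: interpolating over trained instances is easy, while the systematic generalization Fodor and Pylyshyn discuss is not.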

Comment Re:What's this for? (Score 4, Informative) 41

I realize that not everyone is familiar with spectroscopy, so I'll try to help outline the contributions this project makes, which are chiefly in size and cost.

Useful chemical classification can occur with an instrument containing as few as one spectral channel (i.e. a narrow band filter). Colorimeters use three spectral channels, like a conventional camera, to determine the concentration of analytes. How similar the spectral features of the compounds you're analyzing are determines the spectral resolution you need for a given application: in some cases 10, 100, or 1000 spectral channels may suffice, and in other applications you need many more.
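To make the single-channel case concrete, here is a quick sketch (mine, with made-up numbers) of how a colorimeter turns a narrow-band intensity reading into a concentration via the Beer-Lambert law, A = ε·l·c:

```python
import math

# Illustrative single-channel colorimetry via the Beer-Lambert law.
# All instrument values below are hypothetical.

def absorbance(intensity_in, intensity_out):
    """A = log10(I0 / I) for light passing through the sample."""
    return math.log10(intensity_in / intensity_out)

def concentration(A, epsilon, path_cm=1.0):
    """Solve A = epsilon * l * c for the concentration c (mol/L)."""
    return A / (epsilon * path_cm)

# Example: 90% of the light is absorbed, so A = log10(1000/100) = 1.0.
A = absorbance(1000.0, 100.0)
# With a molar absorptivity of 5000 L/(mol*cm) and a 1 cm path:
c = concentration(A, epsilon=5000.0)
print(A, c)  # 1.0, 0.0002 mol/L
```

One channel suffices here only because we assume a single known analyte; distinguishing compounds with similar spectra is what drives the channel counts up.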

The architecture used for many contemporary slit spectrometers was invented by Fraunhofer in the early 19th century, using a diffraction grating, a slit, and some relay optics. There are different architectures that improve upon this design (like coded aperture spectroscopy, which increases the SNR) or that access different spectral regions (such as interferometer-based designs for other wavelengths, like FTIR for infrared spectroscopy), but unless you're getting really fancy, for visible spectroscopy the Fraunhofer architecture is the familiar 200-year-old design that many folks build in a high school science class, and it works rather well for a variety of applications. This spectrometer also uses (more or less) this architecture.
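The heart of that Fraunhofer design is the grating equation, d·sin(θ) = m·λ, which spreads wavelengths out by angle. A small sketch (mine; the grating pitch is just a common illustrative value):

```python
import math

# Illustrative: first-order diffraction angles for a transmission grating,
# from the grating equation d * sin(theta) = m * lambda.

def diffraction_angle_deg(wavelength_nm, lines_per_mm, order=1):
    """Angle (degrees) at which a wavelength leaves the grating."""
    d_nm = 1e6 / lines_per_mm          # groove spacing in nm
    s = order * wavelength_nm / d_nm
    if abs(s) > 1:
        raise ValueError("this order is not diffracted by this grating")
    return math.degrees(math.asin(s))

# A common 1000 lines/mm grating (groove spacing d = 1000 nm):
for wl in (400, 550, 700):             # violet, green, red
    print(wl, round(diffraction_angle_deg(wl, 1000), 1))
```

In a slit spectrometer, those angles map onto detector pixel positions; calibrating that mapping is what turns pixel counts into a spectrum.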

Spectrometers are generally big; many are bench-sized instruments. Currently, an inexpensive visible-range (350-1000nm) USB lab spectrometer with around 500 spectral channels costs around $2k and is about the size of a stack of iPhones, so it's not at all suitable for embedding in a tiny handheld device (like an open source science tricorder). Compared to the commercial mini-spectrometers I'm aware of, this open mini spectrometer has a similar number of detector pixels, a similar spectral range, and a similar size. Its current spectrograph appears to have a FWHM about two times worse than those systems, and its SNR is certainly lower, but it also costs an order of magnitude less. It's also completely open, and you're free to improve the spectrograph design to increase the performance, or potentially use signal processing techniques to increase its effective resolution.
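Since FWHM is the figure of merit being compared here, a short sketch (mine, on synthetic data) of how you'd estimate the FWHM of a measured spectral peak by interpolating the half-maximum crossings:

```python
import numpy as np

# Illustrative: estimate the full width at half maximum (FWHM) of a
# sampled spectral peak by linear interpolation on each flank.

def fwhm(wavelengths, counts):
    counts = np.asarray(counts, dtype=float)
    half = counts.max() / 2.0
    above = np.where(counts >= half)[0]
    lo, hi = above[0], above[-1]
    # interpolate the half-max crossing on the rising and falling flanks
    left = np.interp(half, [counts[lo - 1], counts[lo]],
                     [wavelengths[lo - 1], wavelengths[lo]])
    right = np.interp(half, [counts[hi + 1], counts[hi]],
                      [wavelengths[hi + 1], wavelengths[hi]])
    return right - left

# Synthetic Gaussian peak: analytically FWHM = 2*sqrt(2*ln 2)*sigma
wl = np.linspace(500, 600, 1001)       # 0.1 nm sampling
sigma = 5.0
peak = np.exp(-0.5 * ((wl - 550) / sigma) ** 2)
print(round(fwhm(wl, peak), 2))        # ~11.77 nm
```

A factor-of-two worse FWHM means spectral lines closer together than that blur into one, which is the practical cost being traded against the order-of-magnitude price difference.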

It's not easy to compare this to something like an iPhone with a spectrometer attachment, because it's intended to be an inexpensive but complete spectrometer module rather than a complete spectrometer with a display; the audience is different, and it aims to enable makers and young scientists to build instruments and put these devices in places they otherwise couldn't. But if you want to do the comparison: I'm not sure what the FWHM and effective spectral resolution would be for an iPhone with a spectrograph attachment (it depends on the spectrograph, of course), but just the phone, without a huge spectrograph hanging off it, is about 10 times larger than this, and for the same price you could probably put together 50 of these.
