
Comment Re:Prior art (Score 1) 150

I don't even have access to read the terms of the grant, since the grant is for my advisor. As far as I know, Google does not have any specific rights to the research, which is why we were able to release the results, algorithm, and code into the public domain, and nobody has ever told me that dissemination or use would be restricted in any way. It's common for companies like Google to fund public research that they will later have no control over, since this sort of work benefits Google more than it benefits any of its competitors. And no, that benefit doesn't depend on patents; it's simply because Google has access to huge amounts of data, compute power, and machine learning/computer vision expertise.

Comment Re:Prior art (Score 1) 150

they're patenting a specific method of doing so.

There is nothing specific about the methods they're patenting. I just worked on a very similar project, and after reading the patent, I see very little separating what they patented from what we did. Admittedly, we don't use dimensionality reduction the way they suggest (although we did for a while), and we don't provide specific names for the objects we discover (though we have talked about doing so via crowdsourcing). And yes, our work is more recent than the patent filing, but people have been attempting similar things for ages (e.g. [1], [2]...they are very easy to find). Worse, the two papers I cite provide enough detail to actually produce a working system, whereas the patent provides little detail beyond a few references to well-known machine learning and computer vision techniques. And even when they do suggest methodology, it's always "maybe we'll use this, maybe not"; they tend to list several potential methods without any indication that they've researched which ones actually work.

Comment Re:Interesting concept, but... (Score 2) 104

No, Google Goggles does nothing like this. Google Goggles (and Google search-by-image) is, from the experiments I've done, instance-based image retrieval. That is, it can match objects with exactly the same shape: given a picture of the Eiffel Tower, it will return other images of the Eiffel Tower. However, given a drawing, even a good one, the contour shapes won't match well enough, and the algorithm will return garbage. The same goes for 'deformable objects' like dogs and people.

In fact, I'm quite sure that nothing like this exists. I'm not sure about the actual search engine part of all this, but I did see a talk last fall by one of the researchers who worked on ShadowDraw, which I'm reasonably sure is going to be a component of the final system. The real problem *they* had to solve was the simple fact that the average person is a horrible, HORRIBLE artist. Ask 90% of people to draw a rabbit, and it will come out as a blob that might be an animal, but that's about all you can tell. The algorithms that make the system work as well as it does were genuinely impressive: extremely fast contour indexing, contour combination, converting real photos into convincing sketches. It all sounds easy, but I dare you to actually try implementing it.

Now--and let's see what happens to my karma for saying this--I actually kinda think they deserve a patent for this. Not for coming up with the idea of drawing-based search; that idea is obvious. However, making a system that works as well as ShadowDraw is quite an achievement, and more importantly, Microsoft Research would never have released the algorithm to the public unless it could be patent-protected. The patent in this case isn't about protecting Microsoft's innovation; it's about motivating Microsoft to publish for the sake of other innovators.

Comment Only 1 billion? (Score 1) 130

That's about $13 million. To put that into perspective, the Lunar X-Prize robotics challenge offers prize money of $30 million, and that doesn't even include team sponsorship. According to Wikipedia, the projects at CMU's Robotics Institute alone cost more than $50 million every year. Crisis and all...but still, a billion yen is not much for robotics research.

Comment Re:Still a long, LONG way to go... (Score 5, Informative) 220

Mod parent up. The linked article (and the MIT press release) are misleading. The closest thing I can find to a peer-reviewed publication by Poon is an abstract, here (no, I can't find anything through the official EMBC channels--what a disgustingly closed conference):

And there's some background on Poon's goals here:

The goals, as I read them, are to study specific theories about information propagation across synapses and to study brain-computer interfaces. They never mention building a model of the entire visual system or any serious artificial intelligence. We have only the vaguest theories about how the visual system works beyond V1, and essentially no idea what properties of the synapse are important to make it happen.

About two years ago, while I was still doing my undergraduate research in neural modeling, I recall that the particular theory they're talking about--spike-timing dependent plasticity--was quite controversial. It might have been simply an artifact of the way the NMDA receptor worked. Nobody seemed to have any cohesive theory for why it would lead to intelligence or learning, other than vague references to the well-established Hebb rule.
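For the curious, the basic pairwise STDP rule is easy to write down. A toy Python sketch (the time constants and amplitudes here are illustrative values I picked, not from any particular paper):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP weight change for spike-time difference dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

# A synapse where the presynaptic spike reliably precedes the postsynaptic
# spike is strengthened over repeated pairings:
w = 0.5
for _ in range(100):
    w += stdp_dw(5.0)  # pre leads post by 5 ms -> potentiation
```

Note that nothing in this rule explains *why* repeated potentiation should add up to learning or intelligence--that's exactly the gap in the theory I'm complaining about.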

Nor is it anything new. Remember this story from ages ago? Remember how well that delivered on its promises of creating a real brain? That was spike-timing dependent plasticity as well, and unsurprisingly it never did anything resembling thought.

Slashdot, can we please stop posting stories about people trying to make brains on chips and post stories about real AI research?

Comment Author does not understand information theory... (Score 1) 622

The crux of Krugman's argument seems to be the extraordinarily misleading statement that "A world awash in information is one in which information has very little market value." Krugman has obviously never studied information theory. Yes, our world is 'awash' with information, but that's not because machines are especially good at producing it. Machines are only good at copying it.

Krugman's error stems from his conflation of the two definitions of information. By one definition, the physical number of bits that the human race has managed to store on hard drives, the amount of information the human race has produced has been increasing exponentially. However, this is not useful information, and not the kind of information that requires any serious education to produce. The other definition is from information theory, where information is defined in terms of randomness: here, information is the total number of bits that you need in order to convey a signal in its most compressed form (i.e. the 'random' component of the signal that can't be derived from other parts of the signal). By this definition, the fact that I copy the 100 MB file 'a.mp4' from my desktop to my home folder does not mean that I have produced 100 MB of information; I have produced at most 64 bytes of information, since that's the number of bytes it took for me to describe the new state of the world.
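You can actually demonstrate the distinction with any compressor. A quick Python sketch using the standard-library zlib (the exact sizes will vary; the point is the ratio):

```python
import os
import zlib

random_part = os.urandom(10_000)   # incompressible: genuinely "new" information
copy = random_part * 10            # 100 KB of storage, same information

original_size = len(zlib.compress(random_part, 9))
copied_size = len(zlib.compress(copy, 9))

# Ten copies occupy ten times the storage, but the compressed
# (information-theoretic) size grows only slightly: each repeat is
# derivable from the first instance plus a short "repeat" instruction.
```

Running this, `copied_size` comes out far closer to `original_size` than to ten times it, which is the compressed-form definition of information in action.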

As for the rest of the article, Krugman argues (correctly, I believe) that any job which requires the production of information will remain strictly in the domain of human beings. However, he seems to forget that most physical goods are just copies of other physical goods, and therefore contain very little information. The production of those goods can generally be replaced by machines.

However, there is still some insight in what Krugman says, though you have to think a bit to see it. Krugman is actually arguing that an education is only valuable if it teaches you how to produce information, and that an education which only teaches you to parrot facts makes you very much like a computer, and very much replaceable by one. Hence his choice of lawyers for his example. I don't think we computer scientists have much to worry about from this argument.

Comment Re:one step closer to drive thru degrees (Score 2, Informative) 371

If you want statistics on Harvard, here they are:

The rest of the page gives much more information that you may find interesting.

The reason for this is that the more students they fail, the better they look.

This is also incorrect. Far more important to a school's rankings are (a) the percentage of admitted students who accept the admissions offer, and (b) the number of students who get job offers after graduating. This incentivizes schools to lower failure rates (US News and World Report publishes graduation rates and rolls them into its rankings precisely because low rates turn off prospective students), and also to inflate grades to make their students' resumes look better.

Comment CPUs and GPUs have different goals (Score 5, Interesting) 129

At least as far as parallel computing goes. CPUs have been designed for decades to handle sequential problems, where each new computation is likely to depend on the results of recent computations. GPUs, on the other hand, are designed for situations where most operations happen on huge vectors of data; the reason they work well isn't really that they have many cores, but that the work of splitting up the data and distributing it to the cores is (supposedly) done in hardware. On a CPU, the programmer has to deal with splitting up the data, and allowing the programmer to control that process makes many hardware optimizations impossible.
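To make the dependency distinction concrete, here's a toy Python sketch (illustrative only, obviously not GPU code):

```python
# A loop-carried dependency: each step needs the previous result, so the
# work is inherently sequential -- the kind of problem CPUs are built for.
def running_total(xs):
    total, out = 0, []
    for x in xs:
        total += x          # depends on the previous iteration
        out.append(total)
    return out

# An elementwise map: every output depends only on its own input, so the
# iterations could in principle be distributed across thousands of GPU cores.
def scale(xs, k):
    return [x * k for x in xs]
```

The first loop can't be naively parallelized at all; the second can be split any way you like, which is exactly the shape of workload a GPU's hardware data distribution exploits.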

The surprising thing in TFA is that Intel is claiming to have done almost as well on a problem that NVIDIA used to tout their GPUs. It really makes me wonder what problem it was. The claim that "performance on both CPUs and GPUs is limited by memory bandwidth" seems particularly suspect, since on a good GPU the memory access should be parallelized.

It's clear that Intel wants a piece of the growing CUDA userbase, but I think it will be a while before any x86 processor can compete with a GPU on the problems that a GPU's architecture was specifically designed to address.

Comment Re:Here it is for 5c (Score 1) 844

What an awful article...this one, and the HIV one that everyone keeps citing. This one starts off with the statement from the director of the institute that created it, "male circumcision is a scientifically proven method for reducing a man's risk of acquiring HIV infection." No real scientist would ever make this claim--science does not prove anything.

It gets worse. The way they conducted the studies (in both cases) was to start off with a large group of men, circumcise half of them, and see who comes back with more infections. There's no way to do blinding here, since you're going to know whether or not you've been circumcised. For example, one confounding factor may simply be that circumcisions hurt--maybe the circumcised men just had less sex. Unfortunately, the authors didn't give any evidence for a mechanism, which makes the result somewhat difficult to believe. (As an aside, the mechanism they suggest is that the foreskin helps the virus enter the cells on the surface of the penis--which suggests that infection could be prevented by simply pulling the foreskin back for a while after sex.)

Another odd part about the studies: the Herpes/HPV study was done in Uganda, and the one on HIV was done in Kenya. Applying the results of a study to a population different from the one studied is generally a problem, but it's even worse in this case, because this whole conversation started from the belief that circumcision discourages condom use. Kenya and Uganda are both known for disliking condoms, so any effect of circumcision on condom use would have been minimized in these studies.

Comment Re:Violating the Constitution is a good reason (Score 1) 1657

I'm interested to hear how you define "lie". I think analytic philosophy has shown that it's nearly impossible to decide whether a statement is "true" or "false" in a completely black-and-white sense.

For me, a lie is any attempt to convince someone else of something that you yourself don't believe. And Bush certainly did this; he knew that the intelligence wasn't nearly as condemning as he wanted America to believe.
