I found the article interesting - though I'm still "digesting" it and have yet to read up on the supporting material. Perhaps someone would be kind enough to point me at some sources about what poll results get used for - and, correct me if I'm wrong in "suspecting" that poll results don't reflect election results (in the USA). TIA
Bro-- do you even slashdot? Something tells me you're new around here.
I think the theories we are talking about are ones that do predict all the phenomena that we observe, just like the Standard Model does, but are in some way more elegant. The situation is, we have an existing theory X that predicts everything we observe, and someone comes along with theory Y that also predicts everything we observe, just like theory X, but some people find theory Y more elegant than X. Now, just because X came along first doesn't make it inherently preferable to theory Y. IMO, theory Y is an equally valid line of inquiry, in just the same way that mathematics is; and ultimately physicists will have to decide whether to spend their time learning theory X or theory Y based on their elegance, ease of use, and so on.
The "falsifiability" criterion for science was invented by Karl Popper (IIRC) to distinguish science from things like religion. IMO it's a rather limiting view, and not all philosophers of science accept it as the One True Way. But it doesn't even matter for the present discussion. BOTH theories are falsifiable in that they predict observed phenomena; it's just that they are not differentially falsifiable (I mean, they predict the same things).
Finally, I'd like to share some background on today’s announcement, because this is the 3rd time the PowerShell team has attempted to support SSH. The first attempts were during PowerShell V1 and V2 and were rejected. Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive.
The article is full of shit.
It claims that Gates's blog post here supports LENR, but it does no such thing (although some people in the comments section do mention it).
and found some parts very confusing. E.g. in Fig. 1a, sulfur hydride seems to have critical temperature around 70K at 177GPa, and in Fig. 1b, it seems to have critical temperature of 185K at the same pressure. And the "measurements" in Fig. 4 don't look like measurements, they look like data generated using a mathematical function. Dan
Could you comment on some of the claims in the abstract?
1. Deep learning is a broad set of techniques that uses multiple layers of representation...
Agreed- that's what "deep" implies.
Is multi-scale analysis a primary component of 'deep learning'?
This may be true in vision, but not in general (e.g. in linguistics tasks and in speech, there is usually not a natural notion of scale).
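For concreteness, here's a toy sketch of what "multiple layers of representation" means in practice (all names and sizes are my own, purely illustrative - this is not code from any of the papers discussed): each layer re-represents the previous layer's output, and "deep" just means several such layers are stacked.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer: a linear map followed by a ReLU nonlinearity.
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(4, 8))                      # batch of 4 inputs, 8 features each
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)  # toy weights, randomly initialized
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 2)), np.zeros(2)

h1 = layer(x, w1, b1)    # first-level representation of the input
h2 = layer(h1, w2, b2)   # second-level representation, built on the first
out = layer(h2, w3, b3)  # final output

print(out.shape)  # (4, 2)
```

Note there is nothing inherently multi-scale here: each layer is just a learned transformation of the previous one, which is why the "deep = multi-scale" framing only fits some domains (like vision) and not others.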
2. "relatively little is understood theoretically about why these techniques are so successful at feature learning and compression."
True... deep learning methods are not very easy to analyze (personally I am skeptical that there is much point in trying very hard to analyze them).
"We construct an exact mapping from the variational renormalization group..." Is this not new, not correct, or is this simply not of much use to deep learning?
I think the closest is to say it's not of much use. I didn't read the paper super carefully (and I'm not a physicist so am not familiar with the renormalization group), but I imagine the analogy is not very close at all and only applies in specific cases, e.g. in convolutional nets or something like that.
The renormalization group theory is so general and powerful, it's had profound impacts on many areas of theoretical and mathematical physics. Do you think this can't or won't impact the field of deep learning? If deep learning has multi-scale analysis at its heart, it appears on the surface that RG should be a good treatment. Have there been attempts to use RG for deep learning aside from the present work?
If the connection is real, it would seem to suggest that perhaps deep learning may have something to offer physics, if it really is "employing a generalized RG-like scheme." Do you have any comment on this?
I haven't read the paper in detail but I just don't think it's plausible that there is a very interesting connection as they are such different things.
To pick a random example, imagine you are a botanist and someone told you there is a connection between hydroelectric dams and oranges. Even if there is a connection, it's probably not something that is going to help you very much, and you probably wouldn't be so excited to read the paper explaining the purported connection.
This article is way overblown. This is not the kind of paper that is likely to attract significant attention in the deep learning community. And the person who they got to say it was important, Ilya Nemenman, is not someone I have heard of.
Move along. Nothing to see here.
In her honky-tonk lagoon
Where I can watch her waltz for free
’Neath her Panamanian moon
An’ I say, “Aw come on now
You must know about my debutante”
An’ she says, “Your debutante just knows what you need
But I know what you want”
Read more: http://www.bobdylan.com/us/son...