This happened to me when I tried to update the Wikipedia page about the prosecutor in the Amanda Knox case with a reference to a New Yorker article about some of his previous misdeeds.
Part of the issue is that all university departments have a mix of people, some of whom have skills that are useful in industry and the real world, and some of whom don't, and of course their salaries won't reflect that; they will mostly reflect seniority. So when companies hire away those who are actually doing useful stuff, all that remain are those with outdated skills or those with a very academic approach (e.g. people who are better at writing papers than code). That has a bad effect, because it's the ones who are hired away that would have been teaching students the most marketable skills.
I do have first-hand experience of these issues (I'm a research professor at a top-ranked university).
It may be true, on the margin, that H1B workers depress wages for US workers in similar occupations in the short term, but they also help to grow the US economy overall, especially the tech economy, and almost certainly improve living standards for Americans not in that very limited pool. (And they probably have very little long-term effect on the US market for tech talent, as they grow the market by making it more favorable for capital.)
I found the article interesting - though I'm still "digesting" it and have yet to read up on the supporting material. Perhaps someone would be kind enough to point me at some sources about what the poll results get used for - and correct me if I'm wrong in "suspecting" that poll results don't reflect election results (in the USA). TIA
Bro-- do you even slashdot? Something tells me you're new around here.
I think the theories we are talking about are ones that do predict all the phenomena that we observe, just like the Standard Model does, but are in some way more elegant. The situation is, we have an existing theory X that predicts everything we observe, and someone comes along with theory Y that also predicts everything we observe, just like theory X, but some people find theory Y more elegant than X. Now, just because X came along first doesn't make it inherently preferable to theory Y. IMO, theory Y is an equally valid line of inquiry, in just the same way that mathematics is; and ultimately physicists will have to decide whether to spend their time learning theory X or theory Y based on their elegance, ease of use, and so on.
The "falsifiability" criterion for science was invented by Karl Popper (IIRC) to distinguish science from things like religion. IMO it's a rather limiting view, and not all philosophers of science accept it as the One True Way. But it doesn't even matter for the present discussion. BOTH theories are falsifiable in that they predict observed phenomena; it's just that they are not differentially falsifiable (I mean, they predict the same things).
Finally, I'd like to share some background on today’s announcement, because this is the 3rd time the PowerShell team has attempted to support SSH. The first attempts were during PowerShell V1 and V2 and were rejected. Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive.
The article is full of shit.
It claims that Gates's blog post here supports LENR, but it does no such thing (although some people in the comments section do mention it).
Could you comment on some of the claims in the abstract?
1. Deep learning is a broad set of techniques that uses multiple layers of representation...
Agreed - that's what "deep" implies.
Is multi-scale analysis a primary component of 'deep learning'?
This may be true in vision, but not in general (e.g. in linguistics tasks and in speech, there is usually not a natural notion of scale).
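For what it's worth, the vision case can be made concrete with a minimal sketch (my own illustration, not from the paper under discussion): repeated average pooling produces a pyramid of representations at progressively coarser scales, which is roughly the sense in which convolutional nets do a kind of multi-scale analysis, and loosely echoes block-spin coarse-graining.

```python
import numpy as np

def coarse_grain(image, block=2):
    """Average-pool over non-overlapping block x block patches,
    halving the resolution -- a crude 'block-spin' style coarse-graining."""
    h, w = image.shape
    return (image[:h - h % block, :w - w % block]
            .reshape(h // block, block, w // block, block)
            .mean(axis=(1, 3)))

# Build a multi-scale pyramid: each level is a coarser view of the input.
image = np.arange(64, dtype=float).reshape(8, 8)
pyramid = [image]
while pyramid[-1].shape[0] > 1:
    pyramid.append(coarse_grain(pyramid[-1]))

print([level.shape for level in pyramid])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```

In language or speech there's no obviously analogous notion of a spatial block to average over, which is the point being made above.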
2. "relatively little is understood theoretically about why these techniques are so successful at feature learning and compression."
True... deep learning methods are not very easy to analyze (personally I am skeptical that there is much point in trying very hard to analyze them).
"We construct an exact mapping from the variational renormalization group..." Is this not new, not correct, or is this simply not of much use to deep learning?
I think the closest characterization is that it's not of much use. I didn't read the paper super carefully (and I'm not a physicist, so I'm not familiar with the renormalization group), but I imagine the analogy is not very close at all and only applies in specific cases, e.g. in convolutional nets or something like that.
The renormalization group theory is so general and powerful, it's had profound impacts on many areas of theoretical and mathematical physics. Do you think this can't or won't impact the field of deep learning? If deep learning has multi-scale analysis at its heart, it appears on the surface that RG should be a good treatment. Have there been attempts to use RG for deep learning aside from the present work?
If the connection is real, it would seem to suggest that perhaps deep learning may have something to offer physics, if it really is "employing a generalized RG-like scheme." Do you have any comment on this?
I haven't read the paper in detail, but I just don't think it's plausible that there is a very interesting connection, as they are such different things.
To pick a random example, imagine you are a botanist and someone told you there is a connection between hydroelectric dams and oranges. Even if there is a connection, it's probably not something that is going to help you very much, and you probably wouldn't be so excited to read the paper explaining the purported connection.
"There are things that are so serious that you can only joke about them" - Heisenberg