
Comment Re:OK lets be real (Score 1) 621

Wikipedia is very strict about its 'no defamation of living people' policy. Even if you supply a sourced reference from a reputable place, such as a New Yorker article, it can get rejected.

This happened to me when I tried to update the Wikipedia page about the prosecutor in the Amanda Knox case with a reference to a New Yorker article about some of his previous misdeeds.

Comment Re:missing option (Score 1) 307

I block all ads, and also completely block access to sites that I have found particularly time-wasting:
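For what it's worth, one low-tech way to do the "completely block" part is the hosts file. This is just an illustration, not necessarily the poster's setup, and the domains below are placeholders:

```shell
# /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows)
# Map time-wasting sites to localhost so they never resolve.
# The domains below are placeholders, not the poster's actual list.
127.0.0.1   example-timewaster.com
127.0.0.1   www.example-timewaster.com
```

A browser-level blocker rule accomplishes the same thing per-browser; the hosts file catches every application on the machine.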

Comment Re:If they want to make money (Score 1) 137

I think $200k a year is quite high for a university professor's salary, unless it's for someone very senior (e.g. a department head). I think $100k to $180k is the more normal range. A well-known researcher in a hot field could probably expect compensation in the $300k to $500k range, or more, by going to industry (an educated guess based on what I know of salaries in my field).

Part of the issue is that every university department has a mix of people: some have skills that are useful in industry and the real world, and some don't, and of course their salaries won't reflect that; they mostly reflect seniority. So when companies hire away those who are actually doing useful work, what remains are those with outdated skills, or those with a very academic approach (e.g. people who are better at writing papers than code). That has a bad effect, because the ones hired away are exactly the ones who would have been teaching students the most marketable skills.

I do have first-hand experience of these issues (I'm a research professor at a top-ranked university).

Comment Re:BULL (Score 1) 417

At last, someone pointing out that there isn't a fixed amount of work (also known as the lump of labor fallacy).

It may be true, at the margin, that H-1B workers depress wages for US workers in similar occupations in the short term, but they also help grow the US economy overall, especially the tech economy, and almost certainly improve living standards for Americans outside that very limited pool. (And they probably have very little long-term effect on the US market for tech talent, since they grow the market by making it more favorable for capital.)

Comment Re:and yet (Score 1) 292

I found the article interesting - though I'm still "digesting" it and have yet to read the supporting material. Perhaps someone would be kind enough to point me at some sources about what the poll results get used for - and correct me if I'm wrong in "suspecting" that poll results don't reflect election results (in the USA). TIA

Bro-- do you even slashdot? Something tells me you're new around here.

Comment Re:There is no such thing as non-empirical science (Score 1) 364

In this instance I don't think we need to be too tight-assed about the philosophy of science, and what it means for something to be scientific.

I think the theories we are talking about are ones that do predict all the phenomena that we observe, just like the Standard Model does, but are in some way more elegant. The situation is, we have an existing theory X that predicts everything we observe, and someone comes along with theory Y that also predicts everything we observe, just like theory X, but some people find theory Y more elegant than X. Now, just because X came along first doesn't make it inherently preferable to theory Y. IMO, theory Y is an equally valid line of inquiry, in just the same way that mathematics is; and ultimately physicists will have to decide whether to spend their time learning theory X or theory Y based on their elegance, ease of use, and so on.

The "falsifiability" criterion was proposed by Karl Popper (IIRC) to distinguish science from things like religion. IMO it's a rather limiting view, and not all philosophers of science accept it as the One True Way. But it doesn't even matter for the present discussion: BOTH theories are falsifiable in that they predict observed phenomena; they just aren't differentially falsifiable (I mean, they predict the same things).

Comment Re:I wonder (Score 5, Informative) 285

The linked blog post contains an interesting statement which could be interpreted as bashing Ballmer:

Finally, I'd like to share some background on today’s announcement, because this is the 3rd time the PowerShell team has attempted to support SSH. The first attempts were during PowerShell V1 and V2 and were rejected. Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive.

Comment Re:Palladium foil with just the right parameters (Score 1) 183

I hate Microsoft as much as all of you, but I think Bill Gates is way too smart to support stuff like this.

The article is full of shit.

It claims that Gates's blog post here supports LENR, but it does no such thing (although some people in its comments section do mention it).

Comment Re:Can someone explain to me (Score 1) 80

I skimmed the linked paper and found some parts very confusing. E.g. in Fig. 1a, sulfur hydride seems to have a critical temperature of around 70 K at 177 GPa, while in Fig. 1b it seems to have a critical temperature of 185 K at the same pressure. And the "measurements" in Fig. 4 don't look like measurements; they look like data generated using a mathematical function. Dan

Comment Re:too many words (Score 4, Interesting) 45

Could you comment on some of the claims in the abstract?

1. Deep learning is a broad set of techniques that uses multiple layers of representation...

Agreed, that's what "deep" implies.

Is multi-scale analysis a primary component of 'deep learning'?

This may be true in vision, but not in general (e.g. in linguistic tasks and in speech, there is usually no natural notion of scale).
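To make "multiple layers of representation" concrete, here is a minimal sketch of a two-layer network in plain NumPy; it's my own toy illustration (random weights, no training), not anything from the paper under discussion. Each layer re-represents the output of the previous one:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard rectified-linear nonlinearity
    return np.maximum(0.0, x)

# Toy input: 4 samples with 8 raw features each
x = rng.normal(size=(4, 8))

# Two layers of representation: each is a linear map plus a nonlinearity.
W1 = rng.normal(size=(8, 16))   # first-layer weights
W2 = rng.normal(size=(16, 3))   # second-layer weights, built on the first

h1 = relu(x @ W1)               # first learned representation of the input
h2 = relu(h1 @ W2)              # deeper representation, in terms of h1
print(h2.shape)                 # (4, 3)
```

"Deep" just means stacking more such layers, so later layers see increasingly abstract re-descriptions of the input rather than the raw features.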

2. "Relatively little is understood theoretically about why these techniques are so successful at feature learning and compression."

True... deep learning methods are not very easy to analyze (personally I am skeptical that there is much point in trying very hard to analyze them).

"We construct an exact mapping from the variational renormalization group..." Is this not new, not correct, or is this simply not of much use to deep learning?

The closest is to say it's not of much use. I didn't read the paper super carefully (and I'm not a physicist, so I'm not familiar with the renormalization group), but I imagine the analogy is not very close and only applies in specific cases, e.g. convolutional nets or something like that.
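For intuition on why people draw the analogy at all: an RG "block spin" step coarse-grains a lattice by summarizing each block of sites, which looks superficially like average pooling in a convolutional net. A toy sketch of that coarse-graining step (my own illustration, not taken from the paper):

```python
import numpy as np

def block_average(lattice, b=2):
    """Coarse-grain a 2D array by averaging non-overlapping b x b blocks,
    loosely analogous to a block-spin RG step (or to average pooling)."""
    n, m = lattice.shape
    assert n % b == 0 and m % b == 0, "lattice must tile evenly into blocks"
    # Split into (n//b, b, m//b, b) blocks, then average within each block
    return lattice.reshape(n // b, b, m // b, b).mean(axis=(1, 3))

spins = np.arange(16.0).reshape(4, 4)   # a 4x4 "lattice" of toy values
coarse = block_average(spins)           # 2x2 array of 2x2-block averages
print(coarse)
# [[ 2.5  4.5]
#  [10.5 12.5]]
```

Whether iterating a step like this really captures what trained deep networks do is exactly the contested part of the RG analogy.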

The renormalization group theory is so general and powerful, it's had profound impacts on many areas of theoretical and mathematical physics. Do you think this can't or won't impact the field of deep learning? If deep learning has multi-scale analysis at its heart, it appears on the surface that RG should be a good treatment. Have there been attempts to use RG for deep learning aside from the present work?

If the connection is real, it would seem to suggest that perhaps deep learning may have something to offer physics, if it really is "employing a generalized RG-like scheme." Do you have any comment on this?

I haven't read the paper in detail, but I just don't think it's plausible that there is a very interesting connection, as they are such different things.

To pick a random example, imagine you are a botanist and someone told you there is a connection between hydroelectric dams and oranges. Even if there is a connection, it's probably not something that is going to help you very much, and you probably wouldn't be so excited to read the paper explaining the purported connection.

"There are things that are so serious that you can only joke about them" - Heisenberg