Comment Re:Andromeda Rules (Score 1) 129

In theory, all we need to do is find a black hole and look, with unreasonably high resolution and sensitivity, at a point slightly more than 0.5 black hole radii outside its horizon (i.e. 1.5 Schwarzschild radii from the center). The black hole acts as a lens, and at that point light is deflected by 180 degrees, letting us look back at ourselves and see the Earth as it was 2d years ago, where d is the distance to the hole in light years. In fact, by looking even closer to that point, you can find a spot where light is deflected by 540 degrees, giving us an even fainter and more distorted image of ourselves, arriving a few minutes after the first image, and so on ad infinitum. In practice, even the first image will be so faint that it probably won't contain a single photon, and it would be washed out by the noise in the environment of the black hole (and between us and it) even if it did. But it's a fun thought experiment.
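Just to put rough numbers on that, here's a minimal Python sketch. The black hole mass and distance are made-up example values (roughly Sgr A*-like) and are not from the comment above:

    # Back-of-the-envelope "cosmic mirror" numbers for a hypothetical black hole.
    # Example values only: a 4e6 solar-mass hole at 26,000 light years.

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    M = 4.0e6 * M_sun  # example black hole mass
    d_ly = 26_000      # example distance, light years

    r_s = 2 * G * M / c**2   # Schwarzschild radius (horizon)
    r_photon = 1.5 * r_s     # photon sphere: 0.5 r_s outside the horizon

    print(f"Schwarzschild radius: {r_s / 1e9:.1f} million km")
    print(f"180-degree deflection happens near r = {r_photon / 1e9:.1f} million km")
    print(f"The first return image would show Earth {2 * d_ly:,} years ago")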

Comment Re: far enough (Score 1) 129

Certainly not the flooding of the Mediterranean that was recorded in Sumerian legends and thence made its way into Christian myths?

Isn't it a bit unrealistic that the Zanclean flood, which ended the Mediterranean's latest dry phase 5.33 million years ago (that's about 2 million years before Australopithecus afarensis evolved), should be recorded in Sumerian legends? Perhaps you're thinking of the Black Sea deluge, which might have occurred somewhere between 7400 BC and 5600 BC (if it happened at all).

Comment Re:Nothing to stop the errors creeping in (Score 1) 200

I have also seen articles decline in quality, but I wonder how big a problem this is. What fraction of articles does this apply to? And what fraction of articles is getting better at the same time? When an article declines, does it stabilize at some quality level? If so, what is that level? Perhaps something more complicated is going on, like a slow overall increase in the quality of an article with significant short-term fluctuations, much like global temperatures. I read Wikipedia extensively, both at work and for fun, and at least in the topics I visit, the average article quality is very high. In my fields of expertise, the error rate is also very small. I think this indicates either that the fraction of articles that tend to decline in quality is very small, or that the level at which quality stabilizes is very high.

It's a fundamental problem for them, but one which they can do little about without changing their most basic policies.

I think it's also the reason for Wikipedia's success: more articles recruit more editors, which leads to more articles, and so on. Its predecessor, Nupedia, was written more according to your wishes, but because of its strict focus on experts and quality, it never got the network effects going that have driven Wikipedia's enormous growth. Wikipedia's success, both in the number of articles and in the quality it has achieved (and its quality is, on average, pretty good), is quite the miracle. If you had asked me, or most others, whether "an encyclopedia anyone can edit" could work, I think the answer would have been that it would get bogged down in trolling and sabotage. I guess most people are more constructive than we give them credit for, and the "armies of editors" approach seems to be a very good strategy.

Lately, Wikipedia's balance seems to have shifted away from the initial inclusionism ("allow imperfect and incomplete articles; someone, not necessarily the same person, will improve them and add sources later") towards deletionism ("if an article isn't good enough (yet), delete it; if information isn't sourced (yet), delete it"). While the intention behind this is good, namely getting more reliable articles, I think it might be counterproductive, as aggressive deletion policies probably hurt editor recruitment and hence shrink the pool from which expertise can be drawn. I speculate that part of the reason for the slowdown in Wikipedia's growth over the last few years might be this deletionism trend, though the fact that many important topics already have articles is probably more important.

Comment Re:Actually, a really nice article... (Score 1) 80

The scales on the figures in the article, for example, don't actually go down to 100 GeV -- the left hand edge (log scale) appears to be 1 TeV.

Sorry, my third link was to the wrong article. It should have been this one. That's the one that covers the whole energy range and shows the magnitude and location of the W-boson resonance.

SLAC is apparently capable of generating 1/2 an ampere of beam current. That's basically 10^19 electrons/second, which knocks five orders of magnitude off your estimate of 1 event per 300000 years to one per 3 years.

Wow, I was off by a lot! I don't know how noisy an environment an accelerator is, but I think one would need many times more events per year than that to be able to detect this. It would be really nice, but I'm skeptical.

That seems as though it is low enough that IF there were any sort of actual resonance, it might knock another order of magnitude off and get one at least several events per year, maybe more.

From the figure on page 3 in the article, it seems like the W resonance is at about 6 PeV for a stationary neutrino being impacted by a moving electron, or vice versa. I think that makes sense from a momentum conservation point of view. I haven't done relativistic kinematics in a long while, but the available energy in a collision with a stationary target is E_a^2 = 2 E_1 E_2 + (m_1 c^2)^2 + (m_2 c^2)^2. So for a massless impactor (m_1 = 0) hitting a stationary electron (m_2 = m_e, E_2 = m_e c^2) to have an available energy equal to the W mass (about 80 GeV), we need E_1 = (E_a^2 - E_2^2)/(2 E_2) = 6.3 PeV. If you go significantly below that energy, there isn't enough available energy to produce a W without violating momentum conservation. So I don't think there's much hope for an accelerator electron hitting this resonance with the cosmic neutrino background. :/
Unless I've missed something crucial here. But perhaps we'll have a breakthrough in accelerator technology that will let us reach these levels at some point. If we hit the resonance, the cross section will be of order 1e-31 m^2, 13 orders of magnitude higher than what I used in my back-of-the-envelope calculation. But we aren't likely to hit those energies soon, I think.
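For reference, here is that threshold calculation as a minimal Python sketch, using standard values for the W and electron rest energies (nothing here beyond the formula above):

    # Beam energy for a massless impactor on a stationary electron to reach an
    # available energy E_a equal to the W rest energy:
    # E_1 = (E_a^2 - E_2^2) / (2 E_2), with E_2 = m_e c^2.

    m_W = 80.4e9   # W boson rest energy, eV
    m_e = 0.511e6  # electron rest energy, eV

    E_1 = (m_W**2 - m_e**2) / (2 * m_e)  # required impactor energy, eV
    print(f"Resonance threshold: {E_1 / 1e15:.1f} PeV")  # about 6.3 PeV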

My Ph.D. advisor (Larry Biedenharn) spent a decade or so looking hard at muon catalyzed fusion, so I learned a lot about it then (even though my research was in completely different stuff). The primary sticking point was the huge cost per muon to create muons via e.g. nuclear cross sections and pion decay.

Yes, I've been interested in muon catalyzed fusion myself. It's such a nice idea. A way to reduce the cost of muon production would be very welcome.

Comment Re:Actually, a really nice article... (Score 1) 80

Yes, the accelerated beam is a rapidly moving detector. My point is that it is a rapidly moving detector with a woefully tiny volume. I'm no expert on this - I used this page for neutrino cross sections. Both inelastic and elastic scattering seem to be proportional to collision energy.

Neutrinos with energies of several PeV are regularly observed in neutrino observatories. At these energies, the Earth is able to act as a somewhat effective neutrino shield, resulting in a significant deficit in the high-energy (>60 TeV) neutrino flux from the direction of the ground (i.e. the direction shielded by the Earth). That says something about how difficult even extremely high energy neutrinos are to detect: even a detector the size of the Earth lets quite a few of them through.

It seems you're right that having enough energy to create W bosons in the collision frame does give the cross section a huge boost, though. There is a really nice figure showing this on page 3 of the article From eV to EeV: Neutrino Cross-Sections Across Energy Scales (note: that figure only shows one of several possible scattering processes - see page 40 for more details). But as the other poster pointed out, to get 100 GeV of available energy in the collision frame, you need a much higher beam energy when only one of the particles is moving. In that figure, the threshold seems to be at about 6 PeV. So extragalactic ultra-high-energy neutrinos aren't that far from that point, but particles in our accelerators have far too little energy.

Comment Re:Actually, a really nice article... (Score 1) 80

I think you are overestimating the scattering cross section of even GeV neutrinos. An electron neutrino with 10 GeV of energy in the rest frame of an electron has a scattering cross section of about 2e-44 m^2. There are about 112 electron neutrinos per cm^3, so the (lab frame) scattering rate is about 2e-44 m^2 * c * 112/cm^3 = 6.7e-28/s per electron. The number of protons per beam in the LHC is about 1e14. Assuming the number of electrons per beam at SLAC etc. is roughly the same, we get about 1e-13/s scatterings in total in the beam. So to get a single scattering, one would expect to have to wait about 300,000 years (assuming I didn't mess up this back-of-the-envelope calculation).
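Here is the same back-of-the-envelope calculation as a small Python snippet (same input numbers as above, so the same caveats apply; the electrons-per-beam count is just the LHC-like assumption):

    # Relic-neutrino scattering rate for a stored electron beam,
    # using the same rough numbers as in the text above.

    sigma = 2e-44     # cross section, m^2 (roughly 10 GeV nu_e on an electron)
    n_nu = 112 * 1e6  # relic electron-neutrino density, per m^3 (112 per cm^3)
    c = 3e8           # relative speed, approximately the speed of light, m/s
    N_e = 1e14        # assumed electrons per beam (LHC-like particle count)

    rate_per_electron = sigma * c * n_nu  # ~7e-28 per second
    rate_total = rate_per_electron * N_e  # ~7e-14 per second

    seconds_per_year = 3600 * 24 * 365
    wait_years = 1 / (rate_total * seconds_per_year)  # a few hundred thousand years
    print(f"Per electron: {rate_per_electron:.1e} /s")
    print(f"Whole beam:   {rate_total:.1e} /s")
    print(f"Mean wait per event: {wait_years:.0f} years")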

This is the reason why neutrino detectors are so huge - they have to overcome the tiny scattering rate with a huge number of potential targets. That's why the article is saying one would have to accelerate a whole neutrino detector.

Comment Re:AdBlock/Ghostery/RequestPolicy = inferior (Score 1) 254

AdBlock ("souled-out" 2 Google/Crippled by default)

AdBlock Plus sold out, not AdBlock. Several things call themselves "AdBlock", but as far as I know none of them have sold out. The AdBlock Plus sellout resulted in a fork called AdBlock Edge, which is equivalent to AdBlock Plus, but without the "acceptable" ads "feature".

Comment Re:We're Not (Score 1) 634

What array libraries do you recommend for numerical programming in C++? I'm used to Fortran and numpy arrays, which are very similar. Something that allows convenient slicing of multidimensional arrays, and is as fast as Fortran arrays, would be really nice to know about.

Comment Re:Wrong question (Score 1) 634

In my experience they are the norm rather than the exception. I work on a program that takes 12 hours on 320 cores to complete (and that's when we still only have a small fraction of our expected amount of data). If it were all written in Python, it would take weeks to months instead.

Comment Arrays! (Score 5, Informative) 634

The big thing Fortran has over C is proper support for multidimensional arrays, with powerful slicing operations built into the language; it was the inspiration for numpy arrays. My first languages were C++ and C, but when I do scientific programming, my languages of choice are now Python and Fortran (with f2py making it very easy to glue them together). Fortran is horrible at text processing and has an almost nonexistent standard library, but for scientific use, good arrays make up for that - especially when you can use Python for the non-performance-critical parts.
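For illustration, here is a small numpy sketch of the kind of multidimensional slicing I have in mind (the array and its shape are made up for the example):

    import numpy as np

    # A made-up 3-D array, e.g. (time, latitude, longitude) samples.
    data = np.arange(4 * 5 * 6, dtype=float).reshape(4, 5, 6)

    plane = data[2, :, :]         # one 2-D slice, like data(3, :, :) in Fortran
    column = data[:, 1, 3]        # a 1-D slice along the first axis
    block = data[1:3, ::2, 2:5]   # strided sub-block, no explicit loops

    # Whole-array arithmetic, much like Fortran array expressions:
    anomaly = data - data.mean(axis=0)
    print(plane.shape, column.shape, block.shape, anomaly.shape)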

C++ has some multidimensional array classes, but none of them are as convenient as Fortran arrays, especially when it comes to slicing. At least that's how it was the last time I checked.

Comment Re:alternative to (C) that protects freedoms? (Score 1) 394

Why should the author work so hard to earn such a meager sum? He would be better off doing a 9-5 job with little risk and making roughly the same income.

$100,000 was just an example. I chose it because it seemed like quite a good salary (2-3 times what I earn as a scientist) if he can manage one of these projects per year. But the author sets the price himself. If he doesn't feel like working for $100,000, then he can ask for $400,000 instead. Of course, the higher his demand, the harder it will be to find enough people to meet it.

What you are suggesting is that once pizza sales for the day exceed a certain fixed threshold above the cost, say $1500 (or $500 profit), the pizza owner should give out free pizza to the remaining customers.

No. I'm saying that before making each pizza the pizza house owner should ask to be paid. He sets the price himself so that he covers his costs for running the restaurant + any amount of profits he wants. And only when he's paid does he make the pizza. That sounds quite a bit like a normal pizza restaurant, doesn't it?

In your pizza description, you seem to assume that the pizza chef makes tons of pizzas beforehand, and then tries to get paid afterwards. But that is a paid-after-the-fact model (like our current copyright model), not an upfront payment model.

Just to be clear: the author is not employed by somebody who sets an arbitrary price per creative work that he has to slavishly accept. This isn't a case of the state telling everybody that they must work for $100,000 per book or something. Every author is his own boss and directly asks his potential audience to finance him, choosing the amount he asks for completely freely. This system does not rely on any intervention from external powers at all, so no special laws are needed to force authors or the public to behave in certain ways.

Comment Re:"Universal Back Door"? (Score 1) 394

There were lots of interesting talks related to this at the 30th Chaos Communication Congress. This one, for example. That whole video is well worth a watch (as are the other talks from the conference). The border between a backdoor and an exploit can be a bit fuzzy, though: did these phone companies just make a mistake, or did they willingly collaborate?
