Comment Missing link to actual scientific article (Score 2) 95

As usual with astronomy articles, it can be found on the arXiv, freely available to all. It goes into much more detail than the article linked in the summary. Here is the abstract:

Recent work has exploited pulsar survey data to identify temporally isolated, millisecond-duration radio bursts with large dispersion measures (DMs). These bursts have been interpreted as arising from a population of extragalactic sources, in which case they would provide unprecedented opportunities for probing the intergalactic medium; they may also be linked to new source classes. Until now, however, all so-called fast radio bursts (FRBs) have been detected with the Parkes radio telescope and its 13-beam receiver, casting some concern about the astrophysical nature of these signals. Here we present FRB 121102, the first FRB discovery from a geographic location other than Parkes. FRB 121102 was found in the Galactic anti-center region in the 1.4-GHz Pulsar ALFA survey with the Arecibo Observatory, with a DM = 557.4 ± 3 pc cm^-3, pulse width of 3 ± 0.5 ms, and no evidence of interstellar scattering. The observed delay of the signal arrival time with frequency agrees precisely with the expectation of dispersion through an ionized medium. Despite its low Galactic latitude (b = -0.2°), the burst has three times the maximum Galactic DM expected along this particular line of sight, suggesting an extragalactic origin. A peculiar aspect of the signal is an inverted spectrum; we interpret this as a consequence of being detected in a sidelobe of the ALFA receiver. FRB 121102's brightness, duration, and the inferred event rate are all consistent with the properties of the previously detected Parkes bursts.
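
For a feel for the dispersion relation the abstract leans on, here is a minimal Python sketch. The dispersion constant is the standard one; the band edges are rough figures for ALFA's 1.4-GHz receiver and are my assumption, not from the paper.

    # Cold-plasma dispersion: lower frequencies arrive later, with
    #   t(f) = K_DM * DM / f^2   (f in MHz, DM in pc cm^-3).
    K_DM = 4.149e3   # s MHz^2 cm^3 pc^-1, standard dispersion constant
    DM = 557.4       # pc cm^-3, the value reported for FRB 121102

    def delay_s(f_mhz):
        """Arrival delay in seconds, relative to infinitely high frequency."""
        return K_DM * DM / f_mhz**2

    # Extra delay across a rough 1225-1525 MHz band:
    print(delay_s(1225.0) - delay_s(1525.0))  # ~0.55 s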

Comment Re:RTFA: real engineering is going on (Score 1) 55

There is definitely an unfortunate tendency among slashdotters to be over-cynical towards new technology.

But in the case of D-Wave I think much of the blame lies with the company itself. In the beginning they acted very suspiciously, refusing to let anybody see the insides of their device and refusing to cooperate with the scientific community, all the while charging millions for devices that, for all anyone could tell, did nothing interesting. During this uncooperative phase, many scientists publicly expressed deep skepticism that the D-Wave approach had anything to do with quantum computing, and it was not so strange to suspect that the whole thing was simply a scam. Especially when it turned out that the results of D-Wave's device could be emulated faster than real-time on an ordinary computer.

Since then, D-Wave seems to have turned over a new leaf and become much more cooperative, and I think most people take them seriously now (though there is still much controversy about their approach to quantum computing). But they are still suffering from the bad reputation they earned in the beginning.

Comment Disappointing (Score 5, Informative) 94

I took part in the copyright consultation (along with about 10,000 others), and like many other members of the general public I pointed out the need to reduce the scope and duration of copyright, and to actually measure what effects copyright has rather than blindly assuming it will have its intended consequence of increasing the production of works. I also pointed out that much cultural production, perhaps the majority if you count by the number of authors, is currently illegal due to unauthorized use of copyrighted works. All of it would disappear if the law as written were consistently enforced, which gives us a glimpse of the cost of the current system.

After reading parts of the leaked white paper, I am disappointed by the European Commission's response. They give lip service to these issues ("the need for an evidence-based approach", for example), but only in passing. In their "way forward" suggestions, they always choose either to do nothing or to move according to the wishes of large publishers. They also assert, without evidence, that the dynamic, medium-to-longer-term effect of reducing copyright would be a faster rate of obsolescence of copyrighted material, which would then lead to less incentive to create new works. That's stated as if it were self-evident, just a single page after they emphasized the need for an evidence-based approach. In fact, I think a stronger case could be made for exactly the opposite conclusion: when copyright doesn't last forever, you have an incentive to create new works to benefit from.

I did not expect much from the consultation, but I hoped that they would at least discuss the issues raised there, and argue against parts they disagreed with, rather than just ignoring them.

Comment Re:Java or Python (Score 1) 415

Humans care a lot whether spaces and tabs are intermingled, as long as the set of "humans" includes people other than the original author. Have you ever looked at a program written by somebody who mixed tabs and spaces for indentation, and who used a different tab size than you? It's completely unreadable! If there is one thing your editor should do for you, it is to make it very visible when tabs and spaces are mixed.
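
Incidentally, Python's standard library ships a checker for exactly this ambiguity. A minimal sketch (the file name is just an example):

    # tabnanny flags files whose indentation is ambiguous under different
    # tab sizes, i.e. where tabs and spaces are mixed inconsistently.
    import tabnanny

    tabnanny.verbose = 1          # report each file as it is checked
    tabnanny.check("example.py")  # complains about ambiguous lines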

Comment Re:Warp Drive (Score 1) 564

That ten-line program will always do exactly what it was programmed to do, neither more nor less.

Just because you understand an algorithm (or even invented and implemented it yourself) doesn't mean the algorithm can't produce complex results you never would have anticipated. Consider the Mandelbrot fractal, for example. It is generated by pretty much the simplest algorithm you can imagine, yet it results in a surprising and beautiful structure. Following an algorithm slavishly doesn't prevent arbitrarily complex behavior.
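
A minimal ASCII sketch of that point (resolution and bounds are arbitrary choices): the core of the loop is a single line, yet the boundary it draws is infinitely intricate.

    # The entire "algorithm": iterate z -> z^2 + c and see if it escapes.
    WIDTH, HEIGHT, MAX_ITER = 80, 24, 30

    for row in range(HEIGHT):
        line = ""
        for col in range(WIDTH):
            # Map the character grid onto a region of the complex plane.
            c = complex(-2.0 + 3.0 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
            z = 0
            for _ in range(MAX_ITER):
                z = z * z + c
                if abs(z) > 2:
                    break
            line += "*" if abs(z) <= 2 else " "
        print(line)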

The CPU in your computer is always running the same fixed algorithm, as specified by its wiring diagram. It does exactly what the designers at Intel, AMD, ARM, etc. designed it to do, nothing more, nothing less. But it still produces a huge variety of behaviors - games, word processors, physics simulations, image manipulation - that were never anticipated by the CPU designers. Of course, in the case of a computer, those extra behaviors come in the form of specially crafted data fed into the CPU (via its attached memory) by humans. I'm certainly not claiming that your computer has general AI. My point is that doing "exactly what it's programmed to do, neither more nor less" is not really relevant here.

It may well be possible to create general AI by having a very simple, fixed algorithm operating on a large, dynamic data structure. At a fundamental level, that's how our intelligence works - a simple, fixed algorithm (the laws of physics) operating on the configuration of particles that make up our brains. I don't think such a low-level approach is the most efficient way of going about constructing a general AI, though.

Comment Re:Github overtaken by thuggish government (Score 2) 349

To get that, I think you'll need a distributed peer-to-peer replacement - something like Freenet, but without the enormous overhead its secrecy requirements incur. Basically, parts of each repository would be stored redundantly across all clients, which would all take part in pushes, pulls, pull requests, and so on. Nothing prevents such a program from including GitHub's non-git features too, such as bug tracking. But building it would not be trivial.
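
One thing working in such a system's favor: git objects are already content-addressed, so any peer can verify a chunk it received without trusting the sender. A small sketch of the actual blob-ID scheme git uses:

    # A git blob's ID is the SHA-1 of a short typed header plus the content.
    import hashlib

    def git_blob_hash(data: bytes) -> str:
        header = f"blob {len(data)}\0".encode()
        return hashlib.sha1(header + data).hexdigest()

    # Matches `echo "hello world" | git hash-object --stdin`:
    print(git_blob_hash(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad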

All centralized architectures are vulnerable to this, though as you say, hosts in some countries are more vulnerable than others.

Comment Re:Concerns about online voting (Score 1) 139

I agree with your concerns about selling your vote. But I do think it's possible to design a system that makes vote-selling worthless, though it would be less convenient than normal online voting and would still involve some physical visits to the polling place, just not as often. Basically, every N years you would visit the polling place in person and draw any number of random numbers on slips of paper or similar. You choose one of these random numbers, copy it to a new slip of paper, put that in a sealed envelope, and submit it just as you would a normal physical ballot. That random number is now a pseudonym that can be used to submit online votes for the next period. The other numbers are decoys, there to make selling such numbers harder.

When voting online with this, each vote would be signed with the same number. Crucially, the same response would be given no matter whether the number matches a registered one or not: "your vote has been received" or similar. If somebody is looking over your shoulder, you can just type a wrong code, and your vote will be invalid, but look just as correct to the person observing you.
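
A minimal sketch of that "same answer either way" behavior. Everything here is hypothetical (the pseudonyms, the storage routine); a real system would need cryptographic signatures, rate limiting, and much more.

    import hmac

    REGISTERED = {"482915037164", "903847561029"}  # pseudonyms drawn in person

    def record_ballot(pseudonym: str, ballot: str) -> None:
        pass  # hypothetical storage routine

    def submit_vote(pseudonym: str, ballot: str) -> str:
        # compare_digest avoids leaking validity through response timing.
        valid = any(hmac.compare_digest(pseudonym, p) for p in REGISTERED)
        if valid:
            record_ballot(pseudonym, ballot)
        # The caller gets the same answer either way, so a shoulder-surfer
        # can't tell a real vote from a decoy.
        return "your vote has been received"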

Since you can draw an arbitrary number of random numbers at the polling place, but only one is registered, selling these numbers should be worthless too: you have no way of proving that the number you're selling is the one you actually registered, and a buyer can't ask for "all of them" because he can't know how many numbers you drew. Though people are lazy, so I suppose many would only keep the one they actually used.

Of course, this does nothing to solve the problem of botnets and compromised client machines, which I think is the truly scary one: it could put a lot of political power in the hands of botnet operators and those who buy their services.

Comment Mod parent up! (Score 1) 215

Great suggestion, and much better than the one in the discussion thread linked from the summary, about refusing to expand names starting with -. I don't think your suggestion would break anything, and it would eliminate the problem. It should be the default.

Comment Re:Gravity? (Score 1) 112

But since it's much smaller, the surface gravity would be much greater (you can get deeper into its gravitational well before you reach the surface). The Sun has a surface acceleration of 275 m/s^2, or about 28 g. This white dwarf would have a surface acceleration of about 3.3e6 m/s^2, or 3.3e5 g, more than ten thousand times higher. Attempt no landings there.
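
The arithmetic, for the curious. Assuming roughly one solar mass packed into an Earth-sized radius (typical for a white dwarf, and it reproduces the figures above):

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # kg
    R_SUN = 6.957e8    # m
    R_EARTH = 6.371e6  # m

    g_sun = G * M_SUN / R_SUN**2    # ~275 m/s^2, about 28 g
    g_wd = G * M_SUN / R_EARTH**2   # ~3.3e6 m/s^2, about 3.3e5 g
    print(g_sun, g_wd, g_wd / g_sun)  # ratio ~12000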

Comment Mod parent down (Score 1) 184

In 1983, the US Army Institute for Professional Development ranked TV and radio as the second and third most effective means of propaganda, after face-to-face interaction. (Nowadays the internet and social media would probably be high on the list too.)

What did you check to determine that TV is a "piss-poor way to attempt to shape society"? Did you look at yourself and think "I don't feel like my opinions or feelings have been manipulated by TV, so it clearly doesn't work"? Many (most) people don't think they are affected by advertising either, yet advertising is so profitable that it is almost everywhere. Our minds have known vulnerabilities - appeal to emotion, subconscious association of unrelated things, and so on - and of course these are being exploited. Sadly, we can't patch these vulnerabilities the way we can with software, but it at least helps to be aware of the ways we're being influenced.

Comment Re:Falling funding: Why fusion stays 30 years away (Score 1) 135

Now on top of this all, the energy density of a fusion system is *tiny*, so you need to build *enormous* reactors.

And that's where it falls apart. There is simply no way, under any reasonable development line, that the cost of building the plant, and servicing its debt, can possibly be made up by the electricity coming out. PV, one of the worst power sources in terms of cents/kWh, is currently running at about 15 to 20 cents/kWh. A fusion reactor almost certainly cannot be built that will produce power at under ten times that cost.

I think your post as a whole was very interesting, especially the numbers, but this last section suddenly became very handwavy. Do you have numbers for the last part as well? Some of what you say there is quite surprising, especially the energy density part: am I completely mistaken in remembering that high power from a small land area was one of the advertised advantages of fusion? By volume, ITER will produce about 500 MW from 840 m^3 of reactor, or 0.6 MW/m^3. Area-wise, the ITER facilities will cover 40 hectares, which gives it a power footprint of 1.25 kW/m^2, comparable to coal. And it's unlikely that a reactor with ten times the power would need ten times the facility area, so that number should improve for larger reactors, especially ones that aren't full-blown research bases.
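
For reference, the arithmetic behind those two densities, using the ITER figures quoted above:

    P = 500e6          # W, ITER's design fusion power
    V = 840.0          # m^3, plasma volume
    A = 40 * 10_000.0  # m^2 (40 hectares of site area)

    print(P / V / 1e6, "MW/m^3")  # ~0.6 MW/m^3 by reactor volume
    print(P / A / 1e3, "kW/m^2")  # ~1.25 kW/m^2 by site footprint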

The main difficulty of a tokamak is plasma confinement, but this is a problem that benefits from the square-cube law: the surface area through which heat escapes grows more slowly than the volume in which energy is stored, and the magnetic field curvature goes down. A larger reactor could therefore run at higher densities and temperatures than a small one and would be more efficient overall, so the power density would go up with reactor size.
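
A quick sketch of why the square-cube law helps: for a torus, surface area scales as R*r while volume scales as R*r^2, so the surface-to-volume ratio falls as 1/r. The 6.2 m major radius below is ITER's; the minor radii are illustrative.

    import math

    def torus_surface_over_volume(R: float, r: float) -> float:
        surface = 4 * math.pi**2 * R * r    # boundary through which heat escapes
        volume = 2 * math.pi**2 * R * r**2  # plasma in which energy is stored
        return surface / volume             # = 2 / r, independent of R

    for r in (1.0, 2.0, 4.0):
        print(r, torus_surface_over_volume(6.2, r))  # halves as r doubles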

As you say, an important question is what the final cost per unit of energy will be. Here you basically pull a number of about 200 cents/kWh out of thin air. How do you arrive at that number? What size and type of reactor is it based on? (As a side note, solar power does not look like one of the worst power sources cost-wise in this table.)

I'll also note that while progress has been slow compared to the first optimistic guesses, it is still being made.
