
Comment Re:Warp Drive (Score 1) 564

That ten-line program will always do exactly what it was programmed to do, neither more nor less.

Just because you understand an algorithm (or even invented and implemented it yourself) doesn't mean it can't produce complex results you would never have anticipated. Consider the Mandelbrot fractal, for example. It is generated by pretty much the simplest algorithm you can imagine, but it still results in a surprising and beautiful structure. Just because something is following an algorithm slavishly doesn't mean it can't result in arbitrarily complex behavior.
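
To make that concrete, here is a minimal sketch of the standard escape-time rendering (iterate z <- z^2 + c and watch for divergence); the grid size, iteration cap, and character ramp are arbitrary choices:

```python
# A fixed ten-ish-line algorithm whose output is endlessly intricate.
def mandelbrot_ascii(width=60, height=24, max_iter=30):
    for row in range(height):
        line = ""
        for col in range(width):
            # Map the character grid onto the complex plane.
            c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
            z = 0j
            for i in range(max_iter):
                z = z * z + c            # the entire "algorithm"
                if abs(z) > 2.0:         # escaped: c is not in the set
                    break
            # Shade by how quickly the point escaped.
            line += " .:-=+*#%@"[i * 10 // max_iter]
        print(line)

mandelbrot_ascii()
```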

The CPU in your computer is always running the same fixed algorithm, specified by its wiring diagram. It does exactly what the designers at Intel, AMD, ARM, etc. designed it to do, nothing more, nothing less. Yet it still produces a huge range of different behaviors - games, word processors, physics simulations, image manipulation - that were never anticipated by the CPU designers. Of course, in the case of a computer, those extra behaviors come in the form of specially crafted data fed into the CPU (via its attached memory) by humans. I'm certainly not claiming that your computer has general AI. My point is that doing "exactly what it's programmed to do, neither more nor less" is not really relevant here.

It may well be possible to create general AI by having a very simple, fixed algorithm operating on a large, dynamic data structure. At a fundamental level, that's how our intelligence works - a simple, fixed algorithm (the laws of physics) operating on the configuration of particles that make up our brains. I don't think such a low-level approach is the most efficient way of going about constructing a general AI, though.

Comment Re:Github overtaken by thuggish government (Score 2) 349

To get that, I think you'd need a distributed peer-to-peer replacement - something like Freenet, but without the enormous overhead incurred by the secrecy requirements there. Basically, parts of each repository would be stored redundantly across all clients, and the clients would all take part in pushes, pulls, and so on. There's nothing preventing you from including GitHub's other, non-git features in such a program as well, bug tracking included. But building it would not be trivial.
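
As a rough illustration of the redundancy part, here is a hypothetical sketch using content addressing (as git already does) plus rendezvous hashing, so every client agrees on which peers hold which objects without any central index; the peer names and replication factor are invented:

```python
# Hypothetical sketch: content-addressed repository objects (as in git)
# assigned to several peers by rendezvous hashing, so every client computes
# the same placement with no central index.
import hashlib

PEERS = ["peer-a", "peer-b", "peer-c", "peer-d", "peer-e"]
REPLICAS = 3  # each object is stored on this many peers

def object_id(data: bytes) -> str:
    """Content address, like a git blob hash."""
    return hashlib.sha1(data).hexdigest()

def responsible_peers(oid: str) -> list:
    """Rank peers by hash(peer + object id); the top REPLICAS hold the object."""
    ranked = sorted(PEERS,
                    key=lambda p: hashlib.sha1((p + oid).encode()).hexdigest(),
                    reverse=True)
    return ranked[:REPLICAS]

blob = b"some repository object\n"
oid = object_id(blob)
print(oid, "->", responsible_peers(oid))
```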

All centralized architectures are vulnerable to this, though as you say, hosts in some countries are more vulnerable than others.

Comment Re:Concerns about online voting (Score 1) 139

I agree with your concerns about vote-selling. But I do think it's possible to design a system that makes selling your vote worthless, though it would be less convenient than normal online voting and would still involve some physical visits to the voting booth, just not as often. Basically, every N years you would visit the polling place in person and draw any number of random numbers on slips of paper or similar. You choose one of these random numbers, copy it onto a new slip of paper, put that in a sealed envelope, and submit it just as you would a normal physical ballot. That random number is now a pseudonym that can be used to submit online votes for the next period. The other numbers are decoys that make selling the numbers harder.

When voting online under this scheme, each vote would be tagged with that number. Crucially, the same response would be given whether or not the number matches a registered one: "your vote has been received" or similar. If somebody is looking over your shoulder, you can just type a wrong code, and your vote will be invalid but look exactly like a correct one to the person observing you.
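
A minimal sketch of that indistinguishable-response idea (the codes, storage, and messages are placeholders, and a real system would also need to avoid timing side channels when checking the code):

```python
REGISTERED = {"482913650717", "930158274466"}  # pseudonyms drawn at the booth

def record_vote(code, choice):
    # A real implementation would store the vote keyed by a hash of the
    # code and let a later vote under the same pseudonym replace it.
    pass

def submit_vote(code, choice):
    if code in REGISTERED:
        record_vote(code, choice)  # counted
    # Unregistered codes are silently discarded; the reply is identical
    # either way, so an observer can't tell a real vote from a decoy.
    return "Your vote has been received."

print(submit_vote("482913650717", "candidate A"))  # valid, counted
print(submit_vote("000000000000", "candidate A"))  # invalid, same reply
```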

Since you can draw an arbitrary number of random numbers at the polling place but only one of them is registered, selling these numbers should be worthless too: you have no way of proving that the number you're selling is the one you actually registered, and a buyer can't demand "all of them" because he can't know how many numbers you drew. Though, since people are lazy, I suppose many would only keep the one they actually used.

Of course, this does nothing to solve the problem of botnets and the like, which I think is scary: it could put a lot of political power in the hands of botnet operators and those who buy their services.

Comment Mod parent up! (Score 1) 215

Great suggestion, and much better than the one in the discussion thread linked from the summary, which was to refuse to expand filenames starting with -. I don't think your suggestion would break anything, and it would eliminate the problem. It should be the default.

Comment Re:Gravity? (Score 1) 112

But since it's much smaller, the surface gravity is much greater (you can go much deeper into its gravitational well before you reach the surface). The Sun has a surface acceleration of 275 m/s^2, or about 28 g. This white dwarf has a surface acceleration of roughly 3.3e6 m/s^2, or 3.4e5 g, more than ten thousand times higher. Attempt no landings there.
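
For reference, here is the back-of-the-envelope calculation, g = GM/r^2. The Sun's figures reproduce the 275 m/s^2 above; the white dwarf's mass and radius are assumed typical values (about 1.2 solar masses and an Earth-like ~6,900 km radius), not numbers from the article:

```python
G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8    # kg, m

def surface_gravity(mass, radius):
    return G * mass / radius**2

g_sun = surface_gravity(M_SUN, R_SUN)         # ~274 m/s^2, about 28 g
g_wd = surface_gravity(1.2 * M_SUN, 6.9e6)    # ~3.3e6 m/s^2, about 3.4e5 g
print(f"Sun:         {g_sun:.0f} m/s^2 = {g_sun / 9.81:.0f} g")
print(f"White dwarf: {g_wd:.2e} m/s^2 = {g_wd / 9.81:.1e} g")
print(f"Ratio: {g_wd / g_sun:.0f}x")
```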

Comment Mod parent down (Score 1) 184

In 1983, the US Army Institute for Professional Development considered TV and radio to be the second and third most effective means of propaganda, after face-to-face interaction. (Nowadays the internet and social media would probably be high on the list too.)

What did you check to determine that TV is a "piss-poor way to attempt to shape society"? Did you look at yourself and go "I don't feel like I've had my opinions or feelings manipulated by TV, so it clearly doesn't work"? Many (most) people don't think they're affected by advertising either, yet advertising is so profitable that it's almost everywhere. Our minds have known vulnerabilities - appeal to emotion, subconscious association of unrelated things, and so on - and of course these are being exploited. Sadly, we can't patch these vulnerabilities the way we can with software, but it at least helps to be aware of the ways we're being influenced.

Comment Re:Falling funding: Why fusion stays 30 years away (Score 1) 135

Now on top of this all, the energy density of a fusion system is *tiny*, so you need to build *enormous* reactors.

And that's where it falls apart. There is simply no way, under any reasonable development line, that the cost of building the plant, and servicing its debt, can possibly be made up by the electricity coming out. PV, one of the worst power sources in terms of cents/kWh, is currently running at about 15 to 20 cents/kWh. A fusion reactor almost certainly cannot be built that will produce power at under ten times that cost.

I think your post as a whole was very interesting, especially the numbers, but this last section suddenly became very handwavy. Do you have numbers for that part too? Some of what you say there is quite surprising, especially the energy density part: am I completely mistaken in remembering that high power relative to occupied land area was one of the advantages of fusion? By volume, ITER will produce about 500 MW from 840 m^3 of reactor, or 0.6 MW/m^3. Area-wise, the ITER facilities will cover 40 hectares, which gives a power footprint of 1.25 kW/m^2, comparable to coal. And it's unlikely that a reactor with 10 times the power would need ten times the facility area, so that number should improve for larger reactors, especially ones that aren't full-blown research bases.
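
Spelling out the arithmetic behind those two figures (ITER's 500 MW design power, ~840 m^3 plasma volume, and ~40 ha site are public design numbers):

```python
fusion_power_w = 500e6        # W
plasma_volume_m3 = 840        # m^3
site_area_m2 = 40 * 10_000    # 40 hectares in m^2

print(f"{fusion_power_w / plasma_volume_m3 / 1e6:.2f} MW/m^3")  # ~0.60
print(f"{fusion_power_w / site_area_m2 / 1e3:.2f} kW/m^2")      # 1.25
```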

The main difficulty of a tokamak is plasma confinement, but this is a problem that benefits from the square-cube law: volume grows faster than surface area (so relative heat loss drops), and the magnetic field curvature goes down. So a larger reactor could use higher densities and higher temperatures than a small one, and would be more efficient overall. The power density would therefore go up with reactor size.

As you say, an important question is what the final cost per unit of energy will be. Here you basically pull a figure of about 200 cents/kWh out of thin air. How do you arrive at that number? What size and type of reactor is it based on? (As a side note, solar power does not look like one of the worst power sources cost-wise in this table.)

I'll also note that while progress has been slow compared to the first optimistic guesses, it is still being made.

Comment Falling funding: Why fusion stays 30 years away (Score 5, Informative) 135

It's common to hear someone say "fusion power was 30 years away in the seventies, it's 30 years away now, and it will stay 30 years away" or similar, and sadly there is some truth to that (though perhaps it really is about 30 years now; the estimated date for DEMO, the first full power plant, is 2033). I think one of the reasons is that funding keeps decreasing, far below the optimistic projections of the 70s. The MIT fusion project made this graph to illustrate: https://i.imgur.com/sjH5r.jpg

It's a bit like when you're downloading a file, and while the download keeps making progress, the estimated time left stays put because the download speed keeps going down. I've had that happen a few times, and it requires an exponentially falling download speed. With fusion, the situation isn't quite that bad, but when you consider the sort of funding levels people were imagining before, it isn't surprising that they thought we would have fusion power by the year 2000.
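
Incidentally, the claim that a constant time-remaining estimate requires an exponentially falling speed follows from a one-line differential equation:

```latex
% R(t): data remaining, v(t) = -dR/dt: download speed, T: the (fixed) ETA.
\frac{R(t)}{v(t)} = T
\;\Longrightarrow\;
\frac{dR}{dt} = -\frac{R}{T}
\;\Longrightarrow\;
R(t) = R(0)\, e^{-t/T},
\qquad
v(t) = \frac{R(0)}{T}\, e^{-t/T}.
```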

One interesting way of putting this is to say that fusion power isn't a constant amount of time away, but about 50 billion dollars of funding away. To put those 50 billion dollars in context, fossil fuels have received 594 billion dollars in subsidies in the USA since 1950. So partly fusion is difficult, and partly we're not trying very hard.

Comment Re:Free stuff sucks (Score 1) 221

Lets be honest, if copyright did not exist and people were all expected to give everything away for free and starve

It's hard to be honest about what would happen if copyright didn't exist, because we don't know. But we do know that the absence of copyright wouldn't mean having to "give everything away for free and starve". Many alternative means of compensation have been proposed, but we haven't seriously tested any of them, which is why we can't be sure how they would compare to the current copyright system in the quality and quantity of work produced.

My favorite alternative means of compensation is up-front payment, like we have almost everywhere else in society. This model has gained quite a bit of steam lately through Kickstarter. Basically, the author asks for the full payment for his work before he performs it, rather than extracting it gradually over the years afterwards. The author creates a Kickstarter page detailing his plan for, say, a new book, with some information about what it would cover, and states the price he wants for writing it (say €50,000), possibly with some stretch goals (a bonus chapter at €100,000, for example). Potential readers then choose how much money they want to commit. Once enough pledges come in to reach the author's price, he gets the money and starts working. If too much time passes without the goal being reached (the time limit is commonly 90 days with Kickstarter), the potential readers get their money back and the author must try some other approach.
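
In code, the all-or-nothing settlement rule is tiny; this is just a toy model of the mechanics described above, with invented names and amounts:

```python
def settle(pledges, goal, deadline_passed):
    total = sum(pledges.values())
    if total >= goal:
        return {"author": total}    # goal met: the author is paid in full
    if deadline_passed:
        return dict(pledges)        # goal missed: everyone is refunded
    return {}                       # still open: pledges held in escrow

pledges = {"alice": 30_000, "bob": 15_000, "carol": 10_000}
print(settle(pledges, goal=50_000, deadline_passed=False))  # {'author': 55000}
```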

The advantage of this approach is that since the author has already been paid before he does the work, he doesn't need to control copying: copies are free and can be shared freely. The more copies are shared, and the more people who enjoy his work, the easier it becomes for him to gather money for his next work. What is today called piracy would just be free advertising.

The disadvantage of this system is that it will be hard for unknown authors to find people willing to fund them. Probably, their first book would need to be written for free in order to get enough interested readers for this approach to work. On the other hand, in practice authors already write their first book for free under the current system (they need something to show the editor in order to be funded), so this is not a serious disadvantage.

Projects of more than $1,000,000 are regularly funded through Kickstarter, and more than 50,000 projects have been funded during the 4 years since its founding. So a Kickstarter-inspired model of up-front payment really looks like it could work, and I think it's worth a try.

Comment Re:i applaud the effort (Score 2) 66

I recommend youtube-dl. It's an easy-to-use open source command-line tool for downloading videos from YouTube and many other sites, and it's in the package repositories of most Linux distributions. I usually start a download (youtube-dl link-to-video-page), and then immediately point mplayer at the in-progress file. There's no delay compared to watching in the browser, but seeking is much faster and you get to use a decent player. And if the connection is slow, you just wait a bit.
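
The same workflow, sketched in Python rather than a shell (the URL is a placeholder, and the .part suffix assumes youtube-dl's default behavior of writing in-progress downloads to a partial file):

```python
import glob
import subprocess
import time

url = "https://www.youtube.com/watch?v=PLACEHOLDER"
dl = subprocess.Popen(["youtube-dl", "-o", "video.%(ext)s", url])
time.sleep(5)                    # let some data arrive first
partial = glob.glob("video.*")   # e.g. video.mp4.part while downloading
if partial:
    subprocess.run(["mplayer", partial[0]])
dl.wait()
```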

If you prefer not to use a command-line tool, there are Firefox extensions that do this kind of thing, like NetVideoHunter and DownloadHelper, but they are a bit sleazy and support fewer sites, I think. They also can't be automated the way youtube-dl can.

Comment Re:Armchair astrophysics question (Score 1) 129

From the Wikipedia article:

Estimating the exact rate at which GRBs occur is difficult, but for a galaxy of approximately the same size as the Milky Way, the expected rate (for long-duration GRBs) is about one burst every 100,000 to 1,000,000 years. Only a small percentage of these would be beamed towards Earth. Estimates of rate of occurrence of short-duration GRBs are even more uncertain because of the unknown degree of collimation, but are probably comparable.

So they are mostly seen very far away due to volume effects: there are far more galaxies more than 100 Mly away than galaxies within 100 Mly, simply because the volume of a sphere grows as the cube of its radius.
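
The size of that effect, assuming a roughly uniform galaxy density and the usual ~46,000 Mly comoving radius for the observable universe (and ignoring cosmological subtleties like evolution of the burst rate):

```python
R_OBSERVABLE_MLY = 46_000

def fraction_within(r_mly):
    """Fraction of galaxies closer than r_mly, if density is uniform."""
    return (r_mly / R_OBSERVABLE_MLY) ** 3

near = fraction_within(100)
print(f"Fraction of galaxies within 100 Mly: {near:.1e}")          # ~1e-8
print(f"Galaxies beyond vs. within 100 Mly: {(1 - near) / near:.1e} to 1")
```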
