
Comment Re:Double Irish (Score 1) 825

No other country has this odd view, instead, money earned abroad is taxed abroad.

The problem is that quite often the money earned abroad isn't actually being taxed at all (or is taxed at an extremely low rate). Schemes like the Double Irish use various legal loopholes to prevent anyone from taxing that income (or to expose only a small portion of the total), even when the income would normally be taxable. So you can have goods produced in one country, sold in another, and the profits never taxed anywhere (and in fact the company may well take a deduction on business expenses from production, or other such nonsense). That's the problem here: companies are using loopholes to earn money in countries and not pay taxes on it at all. It's legalized tax evasion.

Dual US/British citizen and earning money in Britain? Great, you'll be paying both UK and US income tax on that!

If and only if the British taxes are less than the taxes you'd be paying in the US, and then only up to the difference. Or you can take the roughly $97,000 exclusion on foreign earned income (so if you make less than that, you pay nothing). Really, the whole system is only intended to make sure that rich people and corporations with the money and resources to take advantage of loopholes still pay what they owe (it doesn't always work, of course, but that's the intent). Slashdot ought to be all over that.
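To make the arithmetic concrete, here's a deliberately simplified sketch of the two mechanisms described above (the exclusion figure and the flat rate are rough illustrative assumptions, and real tax law interacts these rules in more complicated ways; this is not tax advice):

```python
# Illustrative only: a flat-rate toy model of the foreign earned income
# exclusion plus the foreign tax credit. Figures are approximate assumptions.
FOREIGN_EARNED_INCOME_EXCLUSION = 97_000  # roughly the figure cited above

def us_tax_owed(foreign_income, foreign_tax_paid, us_rate=0.28):
    # Income under the exclusion owes nothing to the US.
    taxable = max(0.0, foreign_income - FOREIGN_EARNED_INCOME_EXCLUSION)
    us_tax = taxable * us_rate
    # Foreign tax credit: you only owe the amount by which the US bill
    # exceeds what you already paid abroad.
    return max(0.0, us_tax - foreign_tax_paid)
```

So someone earning, say, $50,000 in Britain owes the US nothing at all, and someone earning well above the exclusion only pays the US the difference (if any) between the two tax bills.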

Comment Re:Some potential, but hardly for a genuine leap (Score 4, Interesting) 282

No other mode of transportation has to carry its own reaction mass and throw it away. Not bicycles, cars, trains, ships, submarines, or airplanes.

Quite right. Because no other form of transportation takes place in a vacuum. Unless you know of some radical new physics, standard reaction-mass engines will be necessary for spaceflight for... well, forever, so I'm not sure exactly what your point is. And yes, they've worked on the idea before with NERVA. We have, believe it or not, made a few technological and engineering breakthroughs since then (mind you: NERVA worked. It worked very well. It was canceled for political reasons, not practical ones).

Comment Re:Shame on them (Score 1) 181

Don't say that he's hypocritical, Say rather that he's apolitical. "once the rockets are up, who cares where they come down? That's not my department," says Wernher von Braun.

--Tom Lehrer, "Wernher von Braun"

Ding ding ding ding! We've got a winner here! One Godwin'ed Slashdot thread! (ok, I'm not actually sure you won, but I didn't see anything up above).

Comment Re:Well... (Score 5, Insightful) 236

The goal of these kinds of security measures isn't to prevent people with malicious intent from breaking them. That is, obviously, impossible. It is to make sure people without malicious intent don't engage in activities which are indistinguishable from malicious activity. That in turn means that if you see people engaging in apparent malicious activity, you can safely assume they are, and operate accordingly (i.e. shoot them, arrest them, etc.)

Comment Re:"inescapable conclusion" (Score 4, Informative) 231

> What's more, there is an energy associated with any given volume of the universe. If that volume increases, the inescapable conclusion is that the energy must increase as well. So much for conservation of energy.

??? Why can't the energy just be less dense?

The FLRW metric (the solution that governs the cosmological expansion of spacetime) has a cosmological constant term in it, initially placed there by Einstein to maintain a steady-state universe, but which we now know drives an accelerating expansion of the universe. That term is exactly what it sounds like: a constant energy density per unit volume of space (paired with a negative pressure). More space means more total energy.
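For reference (standard textbook cosmology, not from TFA), the first Friedmann equation with the cosmological constant term is:

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}
```

The \(\Lambda\) term behaves like a fluid with constant density \(\rho_\Lambda = \Lambda c^2 / 8\pi G\), so the energy contained in a comoving volume scales as \(\rho_\Lambda\, a(t)^3\): more space, more total energy.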

However, TFS and TFA (I've only scanned the referenced paper, but that looks much more reasonable) are absolutely wrong about why this is a problem. It is a problem, but only in the sense of figuring out where it comes from (i.e. what exact mechanism drives the creation of this energy). The fact that energy is not conserved violates no law of physics: in fact, general relativity doesn't conserve energy anyways, and the expansion of the universe certainly does not (even without the non-conservative nature of gravity).

See, the conservation of energy is a result of Noether's theorem, which states that for any differentiable symmetry of the action of a physical system, there is a corresponding conservation law. For time symmetry, this is the conservation of energy. However, time on the scales of the universe is not symmetric. There was a beginning to the universe (which alone breaks the symmetry: you can't shift backwards in time more than ~13.8 billion years), and the universe as it is now looks nothing like it did 10 billion years ago. So we don't expect energy to be conserved in the universe as a whole (even if it is on local scales).
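As a one-line sketch of the underlying mechanics (textbook Hamiltonian dynamics, not specific to the paper): along any solution,

```latex
\frac{dH}{dt} = \frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t}
```

so the energy \(H\) is conserved exactly when the Lagrangian \(L\) has no explicit time dependence. In an FLRW universe the metric carries the time-dependent scale factor \(a(t)\), so \(L\) depends explicitly on time, and no conserved energy follows.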

Comment Re:"Forget about the risk that machines pose to us (Score 4, Insightful) 227

you think it's absolutely impossible that could be achieved in say the next 500 years, considering what humans have accomplished in the last 100?

Absolutely impossible? No. But the problem is that we don't even know where to begin creating a true AI, which means we also know nothing about what threats it may or may not pose... so we also have no actual way to address those threats. All we have right now is pure, 100% complete speculation (no different from speculating about what would happen if we had FTL travel, or psykers, or met aliens). There are plenty of actual threats to humanity that really exist right now (or could be created with our current knowledge and technology), which makes worrying about something we know literally nothing about kind of silly.

Comment Re: Is that engine even running? (Score 1) 89

Fuel injection and spark events only occur at the 10s of Hz scale (topping out at around 60 each per second). Even if you handle cam phasing and MAF sampling at 100 times that interval, you're still within the computational work load of a couple dozen MHz of instructions.

Aye. Now try controlling the engine and fuel injection system, and achieving combustion, without using a spark plug. Because that's what the story is actually about. They're using compression-based ignition (like a diesel engine) rather than spark-based ignition (like a conventional gasoline engine), which requires a detailed knowledge of the state of the engine at each cycle.

The research is only interesting because they are taking advantage of way overspecced processing power to approach combustion more granularly per event and trying to learn from each one and control the next. It only got press here because they used Linux (anything production grade would use QNX or similar).

Nah, it's interesting because it applies machine-learning algorithms to an actual physical problem that might have real-world practical use (the whole Raspberry Pi/Linux angle is a side note in the paper, just to show that the algorithm is fast enough for real-world application). A possible 30% fuel efficiency increase in all/some new cars? Car makers would certainly be interested.
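The cycle-to-cycle idea is easy to sketch. This is a toy illustration (the class and names are made up, and the paper's actual learning algorithm is more sophisticated): observe each combustion event, update a correction, and command the next injection timing.

```python
# Toy sketch of cycle-to-cycle combustion control (assumed names, not the
# paper's actual algorithm): measure each event, nudge the next command.
class CycleController:
    def __init__(self, target, timing=0.0, learn_rate=0.1):
        self.target = target  # desired combustion phasing (toy units)
        self.timing = timing  # current injection timing command
        self.lr = learn_rate

    def update(self, measured):
        # Gradient-style correction: move the timing command so the next
        # cycle's measured phasing lands closer to the target.
        error = measured - self.target
        self.timing -= self.lr * error
        return self.timing
```

Run against even a crude plant model (say, measured phasing = timing plus a fixed unknown offset), this converges on the target within a few dozen cycles, which is the appeal: the controller learns the engine's quirks per cycle instead of relying on a fixed calibration map.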

Comment Re:Whoever is in physical possession of the drugs (Score 4, Informative) 182

It's not okay, but it wouldn't be murder either. It would be manslaughter.

That depends. If you should have known the gun would have a significant chance of hitting someone, you could well be facing a full murder charge. Randomly shooting a gun in a field in the country? Probably manslaughter. Doing the same in a crowded shopping mall? Yep, that'd be murder. Likewise, this bot was shopping randomly on a darknet that has a lot of illegal stuff for sale, and the creator would have (or absolutely should have) known that, which means he would be legally liable for the purchases (if the government decided to press the issue).

Comment Re:Horrible Summary (Score 1) 86

(sigh) You're doing it wrong - that link you gave is the wrong one. The article the summary links to has a link to the correct (and non-paywalled) article at arXiv.org. Have a nice day :-)

The link the GP gave is to the paper linked directly to by the summary (the direct link to the abstract), so some confusion is understandable. In the future, maybe make submissions discuss one and only one paper (or make it obvious they're two papers)?

Comment Re:How much benefit? (Score 4, Informative) 226

It looks like the slowest paths of the transcendental functions were improved by a lot. But how often do these paths get used? The article doesn't say, so the performance benefits may be insignificant.

From TFA, it sounds like the functions in question (pow() and exp()) work by first trying a look-up table/polynomial approximation to see whether it yields an accurate-enough value, then falling back to a slower series of multiplications to calculate the result when the table/approximation method isn't accurate enough. Since this work improved the slow calculation path, my guess is that it will improve quite a few cases. TFA does say the lookup-table method works in the "majority of cases", though it doesn't say exactly how big a majority, so it's hard to say exactly.
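For a feel of how the fast path typically works, here's a minimal sketch of one common scheme for exp() (argument reduction plus a short polynomial); this is a generic textbook approach, not glibc's actual implementation:

```python
import math

# Generic sketch of a fast exp(): write x = k*ln2 + r with small r,
# approximate e^r with a short polynomial, then scale by 2**k.
# Not glibc's actual code; real libm uses carefully tuned tables/minimax fits.
LN2 = math.log(2.0)

def exp_approx(x):
    # Argument reduction: pick k so that |r| <= ln(2)/2
    k = round(x / LN2)
    r = x - k * LN2
    # Degree-6 Taylor polynomial for e^r, evaluated in Horner form;
    # accurate to roughly 1e-7 relative error for |r| <= 0.35
    poly = 1.0 + r * (1.0 + r * (0.5 + r * (1/6 + r * (1/24 + r * (1/120 + r / 720)))))
    return math.ldexp(poly, k)  # poly * 2**k, exact scaling
```

The "slow path" TFA describes kicks in when a scheme like this can't guarantee correctly rounded results, and a much more expensive multi-precision computation takes over; speeding that path up is what the work in question did.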

Comment Re:Torvalds is half right (Score 1) 449

The nature of the workload required for most workstations is non-uniform processing of large quantities of discrete, irregular tasks. For this, parallelism (as Torvalds correctly notes) is likely not the most efficient approach. To pretend that in some magical future, our processing needs can be homogenized into tasks for which parallel computing is superior is to make a faith-based prediction on how our use of computers will evolve. I would say that the evidence is quite the opposite: that tasks will become more discrete and unique.

Right, but we want to continue the "Moore's Law" speedup of processing year over year. And that simply can't happen with single core processing: clock speed is already near the physical limit (as in we would need to start violating the speed of light to increase it much further), and manufacturing process size can't continue shrinking indefinitely either, no matter how close we are to the actual physical limits there. So unless we invent entirely new computing systems (e.g. quantum computers), the only speed gains in the future will inevitably be from parallelization, and there are (for many cases) still massive speed gains to be made in that field, simply because the software was never designed for any parallelization at all. Granted, that'll hit a wall where you can't split tasks up anymore as well, but in many cases this process hasn't even started.
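The wall you eventually hit is just Amdahl's law: the serial fraction of a task caps the speedup no matter how many cores you throw at it. A quick illustration:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when a given fraction of the work
    is parallelized across n_cores and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)
```

A task that's 95% parallelizable gets a speedup of about 5.9x on 8 cores and tops out below 20x on any number of cores, which is exactly the point: the gains are real and often large, but only until the un-parallelizable remainder dominates.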

You're quite right about the graphics, though: the long-term future of graphics technology is probably ray-tracing, and that takes absolutely massive amounts of completely parallel CPU power.
