
Comment Re:hmmm.... (Score 2) 600

Unified theories are more attractive, but every new way of looking at physics (that accurately models reality) is one more potential avenue of insight into the fundamental nature of our universe. This is definitely an exciting discovery, though I do not share their enthusiasm for boiling all of reality down to particle interactions with geometry, rather than statistics.

The Copenhagen interpretation of QM is a disgrace, and any self-respecting scientist should be ashamed to support a theory that hides reality behind a veil of statistics, and uses that as an excuse to cease the pursuit of truth. As useful as QM is for calculations, mainstream physics has been stuck in a rut ever since, with its persistent, complacent acceptance of enshrined theory. The same applies to nonsense like BCS theory, even though it isn't nearly as useful as QM.

There is no solid basis for the existence of particles in the first place, much less for the claim that the universe is fundamentally statistical in nature. Hopefully, simplifying our understanding of "particle" interactions will help illuminate a world without particles and rubbish like wave-particle duality. It seems a far more rational conclusion that the observed wave and quantum behavior emerges from a reality with a fundamental wave nature, rather than from the spectacular contortions necessary with particles.

Wandering a bit further off topic, Dr. Johan Prins also developed a very compelling and useful model of superconductivity, which is based on a wave nature of electrons. It dispenses with non-locality, and replaces it with unified waves. Boson "particles" no longer merely share the same energy state; they may merge into a unified wave, or split into quanta as defined by boundary conditions. (Shared electrons in orbitals, photons in lasers, Bose-Einstein condensates, neutron stars, etc. would follow the same logic: waves constrained by boundary conditions.) It is both predictive and supported by evidence, yet sadly, no one has attempted to verify the results or even consider the theory as far as I know. Apparently, you must not stray too far from the accepted dogma, even while other evidence mounts and the prevailing theory continues to fail. More epicycles and such...

Comment Re:Still no encryption... *sigh* (Score 1) 297

There are no satisfactory workarounds, and never will be. The crypto needs to be handled within ZFS, or it becomes an overcomplicated and inefficient mess. (As you are probably aware.) Consider a ZFS mirror on top of two disks encrypted by the OS: even though the data is identical, it now needs to be encrypted twice on write, and decrypted twice on scrub. For ditto blocks, multiply the amount of crypto work by another two or three. There are now (at least) two keys to manage and still no fine-grained access control. Adding more vdevs to the pool only exacerbates the problem.

Copy-on-write, transaction-oriented filesystems like ZFS are the natural place for crypto, as constructing a nonce is trivial: simply append the transaction ID, block offset, etc. That couples perfectly with stream ciphers like Salsa20 (or XSalsa20 for the extended 24-byte nonce), and offers the possibility of extremely fast, flexible, and efficient crypto. There is no expensive key setup required, no need to generate ESSIVs, and no need for expensive modes like GCM on top of conventional block ciphers, with their multiple encryptions and other costly operations. Furthermore, Salsa20/ChaCha is not only highly secure and trustworthy, but extremely fast, simple, and elegant.
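
To make the idea concrete, here is a minimal sketch of deriving a deterministic nonce from copy-on-write metadata. It is purely illustrative: the field layout, the encrypt_record helper, and the use of ChaCha20 from Python's cryptography package (a close relative of Salsa20, chosen only because bindings are readily available) are my assumptions, not ZFS's on-disk format or any actual zfs-crypto code.

```python
# Illustrative only: derive a per-record nonce from copy-on-write metadata
# (transaction id + record number) and encrypt with a stream cipher.
# No per-block key setup, no ESSIV generation.
import os
import struct

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

KEY = os.urandom(32)  # stand-in for a per-dataset key


def encrypt_record(key: bytes, txg: int, record: int, plaintext: bytes) -> bytes:
    # ChaCha20 here takes a 16-byte field: a 4-byte initial counter (left at
    # zero) followed by a 12-byte nonce, built from the 64-bit transaction id
    # and a 32-bit record number. A real design would also fold in an object
    # id, which is one reason the 24-byte XSalsa20/XChaCha20 nonce is attractive.
    nonce = struct.pack("<I", 0) + struct.pack("<QI", txg, record)
    enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None,
                 backend=default_backend()).encryptor()
    return enc.update(plaintext)


def decrypt_record(key: bytes, txg: int, record: int, ciphertext: bytes) -> bytes:
    # Stream cipher: decryption is the same keystream XOR.
    return encrypt_record(key, txg, record, ciphertext)


block = b"filesystem record contents " * 32
ct = encrypt_record(KEY, txg=123456, record=7, plaintext=block)
assert decrypt_record(KEY, txg=123456, record=7, ciphertext=ct) == block
```

Since the transaction ID never repeats, every write gets a fresh keystream for free, which is exactly the property that makes CoW filesystems such a natural fit for stream ciphers.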

After all of the work of hammering a square peg into a round hole with conventional full-disk encryption, the performance of Salsa20 in ZFS/HAMMER/btrfs would rival hardware-accelerated block-device crypto and be useful on a far greater range of hardware. (Typically it should even surpass it, as redundant crypto operations are eliminated.)

There is ongoing work on ZFS crypto at https://github.com/zfsrogue/zfs-crypto, though I'm not sure how it is progressing. Having zfs-crypto integrated would be very useful, not only for efficiency reasons, but for the simple and flexible key management. While there are alternatives to a number of other features that ZFS offers, none of them come close to offering the flexibility and convenience of ZFS.

Comment Re:Does the UK get any say? (Score 1) 148

It is worth mentioning that this issue is exclusive to solid-fuel reactors. With fluid fuel, the problematic gases just bubble out and require no special attention. There are no safety issues with rapidly cycling a LFTR or other molten salt reactor. In practice, they will only be limited by how fast the turbine can spin up and down, and the reaction will follow suit.

Comment Re:Another failure of "unlimited" bandwidth (Score 1) 555

The issue here isn't exactly net neutrality, it's that Google has to have some way of stopping users from sucking up all the bandwidth.

If the ISPs quit insisting on these fake "unlimited" bandwidth plans, there wouldn't be a need to have weird rules to stop people from running high-bandwidth servers.

Yes, selling something you can't provide is asking for trouble. If many people are saturating gigabit links 24/7, the pricing needs to allow for that. However, transfer caps and violations of net neutrality are not the answer.

The best solution is to advertise honestly, and the FCC should enforce this. Connections should be sold by minimum guaranteed rate and maximum burst rate, with all else neutral. If there is any prioritization, it should only be among a customer's own packets, at their request. This is easily done with guaranteed bandwidth available, and without any violation of network neutrality or impact on other users.

Selling by guaranteed rate has the advantage that there is a hard relationship between what is sold and what the ISP must provision. If customers purchase more, the ISP must build more--rather than arbitrarily oversubscribing their networks. This incentive to invest in infrastructure is completely missing today. Since most people won't be using 100% of their allotted bandwidth, the excess can be divided fairly.
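
As a rough illustration of how that division could work (a sketch of max-min fair sharing, not any ISP's actual scheduler; the subscriber figures below are made up), spare capacity above the guarantees can be split among whoever still wants more, up to their burst cap:

```python
# Sketch: share link capacity among subscribers sold a guaranteed minimum
# and a burst ceiling. Guarantees are satisfied first (capped at demand);
# leftover capacity is then split max-min fairly among those still wanting
# more. Assumes the link is provisioned for at least the sum of guarantees,
# which is the whole point of selling by guaranteed rate.

def allocate(capacity, subscribers):
    """subscribers: list of dicts with 'guarantee', 'burst', 'demand' in Mb/s."""
    alloc = [min(s["demand"], s["guarantee"]) for s in subscribers]
    remaining = capacity - sum(alloc)
    while remaining > 1e-9:
        hungry = [i for i, s in enumerate(subscribers)
                  if alloc[i] < min(s["demand"], s["burst"])]
        if not hungry:
            break  # everyone is satisfied; the rest of the capacity sits idle
        share = remaining / len(hungry)
        for i in hungry:
            cap = min(subscribers[i]["demand"], subscribers[i]["burst"])
            extra = min(share, cap - alloc[i])
            alloc[i] += extra
            remaining -= extra
    return alloc

# 1000 Mb/s link, three subscribers each sold 50 Mb/s guaranteed / 1000 burst.
subs = [{"guarantee": 50, "burst": 1000, "demand": 900},
        {"guarantee": 50, "burst": 1000, "demand": 100},
        {"guarantee": 50, "burst": 1000, "demand": 30}]
print(allocate(1000, subs))  # [870.0, 100, 30]: idle capacity flows to the heavy user
```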

This would work great for Google, offering something like 50Mb/s guaranteed and 1Gb/s burst. The minimum guaranteed bandwidth that cable and phone companies can offer would certainly look pitiful by comparison, but it would be an honest reflection of what they can actually provide, rather than the meaningless "up to" numbers.

Comment Re:Nuclear steam (Score 4, Informative) 181

However, the carbon moderator core at over 600 C scares me. What if oxygen gets in there? Burning core, reminiscent of Chernobyl. Very scary.

Contrary to what you might think, carbon is actually safer at those temperatures. Under neutron bombardment at low temperatures, the Wigner energy can build up, and that is the source of the problems. However, at the operating temperatures of molten salt reactors, solid graphite is quite safe. (You can purchase graphite crucibles good to 2500 C.) There is further discussion here.

Comment Productivity is a good thing, jobs are not... (Score 2) 213

The ultimate goal of our endeavors should be to produce wealth for human beings, not mindless jobs, nor backbreaking labor. If tedious and burdensome tasks like agriculture, manufacturing, and mining can be done by machines, all the better. That should free up people to do other things, including not slaving away for 40-60 hours a week. Increases in productivity are always a good thing--the problem is the distribution of wealth, or rather the utter lack thereof nowadays. As jobs inevitably evaporate, we need to find new and better ways of distributing that wealth.

One particular area of productivity deserves special mention. Virtually all wealth is derived from energy, yet energy has no intrinsic value. It is purely an input, so energy generation should be done as cheaply and efficiently as possible, since its cost compounds the cost of everything else. It is asinine to make it into a jobs program, yet that is exactly what Obama has done with his recent proposal.

Comment Re:NIMBY (Score 1) 436

Pebble and molten salt reactors still benefit from everything that was learned from past mistakes. If you had a pebble or MSR reactor with Chernobyl-era knowledge and experience, Chernobyl would likely still have happened: still stuck with a massive power surge once radon poisoning clears up. Same for TMI and Fukushima. Pebbles and molten salt may be more convenient and safer to handle and process but there is very little they can do to prevent operator, design and construction errors.

You mean Xenon poisoning? That problem doesn't exist in an MSR, as the xenon bubbles out and doesn't build up. Just one of many ways in which MSRs are inherently superior.

Pebbles still trap volatile fission products and poisons, and could pose a problem if damaged. Moreover, they magnify the waste stream with graphite, and make reprocessing virtually impossible. As with other solid fuels, they cannot be burned completely, and require long-term isolation. They also require an expensive fabrication process up front. All considered, I'm not sure why so many people find them attractive when fluid fuels are so clearly superior.

Comment End of Freedom of Speech and Democracy (Score 1) 161

Aside from the apocalypse, that is one of the things I worry about. Shills are bad enough today, but imagine if they could be deployed programmatically; just about any form of online speech could be drowned out with ease. That is assuming that the government/corporations aren't already using AI to accomplish pervasive censorship.

Before this gets out of hand, we need to head it off by deploying peer-to-peer communication systems with a pervasive trust model. This doesn't necessarily preclude anonymity or AI participation, but they would have a significantly more difficult time gaining trust in the first place.

Submission + - Monty Montgomery: Next-Next Generation Video: Introducing Daala (livejournal.com)

An anonymous reader writes: Xiph.Org has been working on Daala, a new video codec, for some time now, though Opus work had overshadowed it until just recently. With Opus finalized and much of the mop-up work well in hand, Daala development has taken center stage.

I've started work on 'demo' pages for Daala, just like I've done demos of other Xiph development projects. Daala aims to be rather different from other video codecs (and it's the first from-scratch design attempt in a while), so the first few demo pages are going to be mostly concerned with what's new and different in Daala.

I've finished the first 'demo' page (about Daala's lapped transforms), so if you're interested in video coding technology, go have a look!

Comment Re:Versus H264 advantages are what? (Score 1) 161

I'm assuming that the speed is speed of encoding rather than playback? This isn't something that many people are particularly worried about.

I think the "40% slower than vp8" figure was for playback. However, encoding speed also has to be reasonable, or there will be no content. With YouTube, Google may have a good head start if they succeed in getting others on board.

As your information is all taken from Google please take it with a huge pinch of salt. Google are bound to present a rosy view of VP9 in comparison with h265 given their investment in it.

Of course, but if the presented videos are any indication, it looks quite good. It doesn't have to be perfect. As long as it is significantly better than h.264 and close to h.265, it should find widespread use. Opus is also an excellent audio codec with very low latency, which will make this a good option for real time communications.

Personally I'm not keen on this. I don't care if its royalty free and unencumbered by patents. I don't want a single entity in control of a standard and regardless of the open nature of this Google are still in control. If this were Apple* or releasing a patent free codec to the world would you be so welcoming?

* Don't be dismissive of this Apple/NeXT do have a decent record of open source software releases.

Once the bitstream is frozen and the codec is open, you are free to use it as you like. The code is already available under a BSD-style license, though I do expect that a formal standard will be forthcoming. The availability of hardware should also provide incentive not to break compatibility in future versions.

I don't see how having the MPEG LA in control of any standard is a better option. The primary difference is that developers are free to incorporate this into applications without the trouble and cost of licensing. That directly benefits users, so perhaps you should care about royalties and patents. The x264 project is great, but it is limited in that respect.

Comment Re:Versus H264 advantages are what? (Score 5, Informative) 161

From a technical point of view it's basically h265's peer. That's partially because it's largely based on the same tech as h265, in the same way VP8 was largely similar to h264. And is speculated that it has the same licensing issues that VP8 had, for most of the same reasons.

And the speed issue is entirely due to an almost complete lack of hardware support. And while h265 already has announced and demonstrated support, I am not aware of any VP9 support so far.

And doing VP9 decode in software has order-of-magnitude higher requirements than VP8. If YouTube serves up a VP9 video to your phone, you'll wish for the good old days of Flash video.

From the Q&A afterward, it was mentioned that average vp9 quality is within 1% of h.265, but it didn't sound like h.265 was anywhere near ready to roll out, with the only available option being a horrifically slow reference encoder. As for speed, they claim vp9 is about 40% slower than vp8, which in turn is twice as fast as h.264. As such, vp9 should handily outperform h.264 in software.
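
A quick back-of-the-envelope check of those quoted figures (the numbers are the talk's claims, not independent measurements, and "40% slower" is ambiguous, so both readings are shown):

```python
# Relative software decode speed, taking h.264 as the baseline (1.0).
vp8 = 2.0                 # claim: vp8 decodes roughly twice as fast as h.264
vp9_low = vp8 * 0.6       # "40% slower" read as 60% of vp8's speed   -> 1.2x h.264
vp9_high = vp8 / 1.4      # "40% slower" read as 40% more decode time -> ~1.43x h.264
print(vp9_low, round(vp9_high, 2))  # either way, vp9 beats h.264 in software
```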

The open source and royalty-free vp9/opus combination sounds like a very compelling option for the html5 video tag, and may become a de facto standard before h.265 is widely deployed. Hardware support for vp9 is also in the works, so if the codec lives up to the claims, there no longer appears to be any good reason to put up with the MPEG LA.

Comment Re:Power Efficiency - MIPS vs ARM (Score 2) 238

As the folk at Transmeta (and others) demonstrated, logic to decode any random ISA and drive a RISC core faster than the old VAX microcode days is very possible. This seems to be the way of modern processors. So ARM/x86/x86_64 ISA almost does not matter except to the compiler and API/ABI folk. If you want to go fast, feed your compiler folk well.

One of the best ways you can help the compiler folk is with an orthogonal and sensible architecture. Furthermore, consider that generating good code is a problem that must be solved for every language, so starting with a good ISA makes for a lot less work.
