
Comment Re:Just another step closer... (Score 1) 205

You make good points. However, I think you're somewhat mischaracterizing the modern theories that include parallel universes.

So long as we use the real physicists definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

(emphasis added) Your implication is that physicists have invented parallel universes, adding them to their theories. In actuality, parallel realities are predictions of certain modern theories. They are not axioms, they are results. Max Tegmark explains this nicely in a commentary (here or here). Briefly: if unitary quantum mechanics is right (and all available data suggests that it is), then this implies that the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist. Now, you can frame a philosophical question about whether entities in a theory 'exist' or whether they are just abstractions. But it's worth noting that there are plenty of theoretical entities that we now accept as being real (atoms, quarks, spacetime, etc.). Moreover, there are many times in physics where, once we accept a theory as being right, we accept its predictions about things we can't directly observe. Two examples would be: to the extent that we accept general relativity as correct, we make predictions about the insides of black holes, even though we can't ever observe those areas. To the extent that we accept astrophysics and big-bang models, we make predictions about parts of the universe we cannot ever observe (e.g. beyond the cosmic horizon).

An untestable idea isn't part of science.

Indeed. But while we can't directly observe other branches of the wavefunction, we can, through experiments, theory, and modeling, indirectly learn much about them. We can have a lively philosophical debate about the extent to which we are justified in using the predictions of theories to call indirectly-inferred things 'real' vs. 'abstract only'... but my point is that parallel realities are not alone here. Every measurement we make is an indirect inference based on limited data, extrapolated using a model we have some measure of confidence in.

Occam's Razor ...

Occam's Razor is frequently invoked but is not always as useful as people make it out to be. If you have a theory X and a theory X+Y that both describe the data equally well, then X is better via Occam's Razor. But if you're comparing theories X+Y and X+Z, it's not clear which is "simpler". You're begging the question if you say "Clearly X+Y is simpler than X+Z! Just look at how crazy Z is!" More specifically: unitary quantum mechanics is arguably simpler than quantum mechanics + collapse. The latter involves adding an ad-hoc, unmeasured, non-linear process that has never actually been observed. The former is simpler at least in description (it's just QM without the extra axiom), but as a consequence predicts many parallel branches (it's actually not an infinite number of branches: for a finite volume like our observable universe, the number of possible quantum states is large but finite). Whether an ad-hoc axiom or a parallel-branch prediction is 'simpler' is debatable.
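To put a rough number on 'large but finite' (my own back-of-envelope using the holographic bound, not a figure from Tegmark's commentary): the entropy of a region is bounded by its horizon area, and for the observable universe that bound works out to roughly

S \le \frac{k_B A}{4 \ell_P^2} \sim 10^{122}\, k_B \quad\Longrightarrow\quad N_{\mathrm{states}} \sim e^{S/k_B} \sim e^{10^{122}}

Unimaginably huge, but not infinite.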

Just about any other idea looks preferrable to an idea that postulates an infinite number of unverifiable consequents.

Again, the parallel branches are not a postulate, but a prediction. They are a prediction that bothers many people. Yet attempts to find inconsistencies in unitary quantum mechanics have so far failed. Attempts to observe the wavefunction collapse process have also failed (there appears to be no limit to the size of the quantum superposition that can be generated). So the scientific conclusion is to accept the predictions of quantum mechanics (including parallel branches), unless we get some data that contradicts them. Or, at the very least, not to dismiss these predictions entirely unless you have empirical evidence against either them or unitary quantum mechanics itself.

Comment Re:Can't have it both ways (Score 1) 330

I disagree. Yes, there are tensions between openness/hackability/configurability/variability and stability/manageability/simplicity. However, the existence of certain tradeoffs doesn't mean that Apple couldn't make a more open product in some ways without hampering their much-vaunted quality.

One way to think about this question is to analyze whether a given open/non-open decision is motivated by quality or by money. A great many of the design decisions that are being made are not in pursuit of a perfect product, but are part of a business strategy (lock-in, planned obsolescence, upselling of other products, DRM, etc.). I'm not just talking about Apple; this is true quite generally. Examples:
- Having a single set of hardware to support does indeed make software less bloated and more reliable. That's fair. Preventing users from installing new hardware (at their own risk) would not be fair.
- Similarly, having a restricted set of software that will be officially supported is fine. Preventing any 'unauthorized' software from running on a device a user has purchased is not okay. The solution is to simply provide a checkbox that says "Allow 3rd-party sources (I understand this comes with risks)", which is what Android does but iOS does not.
- Removing seldom-used and complex configuration options from a product is a good way to make it simpler and more user-friendly. But you can easily promote openness without making the product worse by leaving configuration options available but less obvious (e.g. accessible via command-line flags or a text config file).
- Building a product in a non-user-serviceable way (no screws, only adhesives, etc.) might be necessary if you're trying to make a product extremely thin and slick.
- Conversely, using non-standard screws, or using adhesives/etc. where screws would have been just as good, is merely a way to extract money from customers (forcing them to pay for servicing or buy new devices rather than fix old hardware).
- Using bizarre, non-standard, and obfuscated file formats or directory/data structures can in some cases be necessary in order to achieve a goal (e.g. performance). In most cases, however, it's actually used to lock in the user (preventing users from directly accessing their data, and preventing third-party tools from working). E.g. the way iPods store music files and metadata is extremely complex, at least last time I checked (all files are renamed, so you can't simply copy files to and from the device); see the sketch after this list. The correct solution is to use open formats. In cases where you absolutely can't use an established standard, the right thing to do is to release your internal docs so that others can easily build upon or extend them.
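To make the lock-in point concrete, here is a minimal sketch (in Python) of what a third-party tool ends up having to do: walk the obfuscated music folder and rebuild human-readable names from the embedded tags. It assumes the device mounts as mass storage with the usual iPod_Control/Music layout and that the files keep their ID3/MP4 tags; the mount path and the mutagen library are my own choices for illustration, not anything Apple documents.

    from pathlib import Path
    from mutagen import File as MediaFile   # third-party tag reader: pip install mutagen

    MUSIC_ROOT = Path("/media/ipod/iPod_Control/Music")   # assumed mount point

    def recover_library(root=MUSIC_ROOT):
        """Rebuild a human-readable track list from the obfuscated filenames."""
        library = []
        for path in sorted(root.rglob("*")):
            if not path.is_file():
                continue
            tags = MediaFile(str(path), easy=True)   # parses ID3/MP4/etc.; None if unrecognized
            if tags is None:
                continue
            library.append({
                "file": path.name,                        # e.g. "GKPL.mp3" - tells you nothing
                "artist": tags.get("artist", ["?"])[0],
                "title": tags.get("title", ["?"])[0],
            })
        return library

    for track in recover_library():
        print(track["file"], "->", track["artist"], "-", track["title"])

With open formats and a sane directory layout, none of that reverse-engineering would be needed; you could just copy the files.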

To summarize: yes, there are cases where making a product more 'open' will decrease its quality in other ways. But there are also many examples where you can leave the option for openness/interoperability without affecting the as-sold quality of the product. (Worries about 'users breaking their devices and thus harming our image' do not persuade; the user owns the device, and ultimately we're talking about experienced users and third-party developers.) So we should at least demand that companies make their products open in all those 'low-hanging-fruit' cases. We can then argue in more detail about the fringe cases where there really is an openness/quality tradeoff.

Comment Re:n = 1.000000001 (Score 3, Informative) 65

I'm somewhat more hopeful than you, based on advances in x-ray optics.

For typical x-ray photons (e.g. 10 keV), the refractive index is 0.99999 (delta = 1E-5). Even though this is very close to 1, we've figured out how to make practical lenses. For instance, compound refractive lenses (CRLs) use a sequence of refracting interfaces to accumulate the small refractive effect. Capillary optics can be used to confine x-ray beams. A Fresnel lens design can be used to decrease the thickness of the lens, giving you more refractive power per unit length of the total optic. In fact, you can use a Fresnel zone plate design, which focuses the beam via diffraction (another variant is the Laue lens, which focuses via Bragg diffraction; multilayer Laue lenses, for example, are now being used for ultrahigh focusing of x-rays). Clever people have even designed lenses that simultaneously exploit refractive and diffractive focusing (kinoform lenses).

All this to say that, with some ingenuity, the rather small refractive index differences available for x-rays have been turned into decent amounts of focusing in x-ray optics. We have x-ray optics now with focal lengths on the order of meters. It's not trivial to do, but it can be done. It sounds like this present work is suggesting that for gamma-rays the refractive index differences will be on the order of 1E-7, which is only two orders of magnitude worse than for x-rays. So, with some additional effort and ingenuity, I could see the development of workable gamma-ray optics. I'm not saying it will be easy (we're still talking about tens or hundreds of meters for the overall camera)... but for certain demanding applications it might be worth doing.
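As a rough illustration of the scaling (a back-of-envelope of my own using the standard thin-lens formula for a stack of parabolic refractive lenses, f = R / (2 N delta); the geometry numbers are invented, only the deltas come from the discussion above):

    def crl_focal_length(R, N, delta):
        """Focal length (m) of N stacked parabolic lens surfaces with apex
        radius of curvature R (m) and refractive decrement delta."""
        return R / (2 * N * delta)

    R, N = 200e-6, 50                      # 200 micron apex radius, 50 lens elements
    print(crl_focal_length(R, N, 1e-5))    # x-rays,     delta ~ 1e-5  ->  ~0.2 m
    print(crl_focal_length(R, N, 1e-7))    # gamma-rays, delta ~ 1e-7  ->  ~20 m

So, all else being equal, the same optic ends up with a focal length roughly a hundred times longer (or needs roughly a hundred times more lens elements / tighter curvature) - painful, but not obviously impossible.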

Comment Is this really new? (Score 2) 93

Unfortunately the article is dumbed down a lot, so it is not easy to understand what technology is actually supposed to be used. But this sounds a lot like Rapid Thermal Annealing (RTA/RTP), which has been used for decades in semiconductor manufacturing. It has also been used a lot in lab environments to manufacture solar cells. It is possible that the energy consumption can be reduced, but the tool throughput and maintenance costs are quite a bit higher than those of a conventional furnace. I suppose that is why it hasn't caught on so far.

Comment Re:Business potential in going green (Score 1) 410

Australia is extremely dumb when it comes to renewable energies and especially photovoltaics. Yes, it is that harsh. Australia has some of the most prolific research institutes in that area (UNSW and the ANU) and provides ideal conditions for electricity generation from solar energy. Yet it has completely and utterly failed to capitalize on this. There are no photovoltaic companies of relevance in Australia, and there are hardly any photovoltaic power plants.

UNSW has now been reduced to educating recruits for Chinese solar cell companies. Well done, Oz government; I hope sheep breeding and mining will remain relevant for another century.

Comment Re:Remember carbon nanotubes? (Score 1) 345

Yes, I was aware of these approaches to opening a band gap. I also recall a recent paper about field-induced band gap opening. 250 meV is not a lot, but it is a beginning. A band gap as small as this will still lead to serious junction leakage. Nowadays the ability to turn transistors off has become crucial; a major advantage of Intel's recently announced 22 nm tri-gate technology is that transistors can be turned off much more efficiently.
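To put a rough number on why 250 meV is still problematic (a textbook-level estimate of my own, using only the n_i ∝ exp(-Eg / 2kT) scaling of the intrinsic carrier density; real leakage also depends on tunneling, geometry, doping, etc.):

    import math

    def ni_ratio(eg_small_ev, eg_large_ev, kt_ev=0.0259):
        """Ratio of intrinsic carrier densities from the exp(-Eg/2kT) factor alone (300 K)."""
        return math.exp((eg_large_ev - eg_small_ev) / (2 * kt_ev))

    print(ni_ratio(0.25, 1.12))   # ~2e7: a 250 meV gap vs. silicon's 1.12 eV

Roughly seven orders of magnitude more intrinsic carriers than silicon, which is why the off-state leakage is such a worry even though 250 meV is 'a beginning'.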

I don't think graphene transistors would require a significant investment. Apart from the tools to deposit the graphene, all other tools could be reused, provided that silicon is still the base material. Investment has never been a big issue for the larger companies.

Comment Re:Remember carbon nanotubes? (Score 1) 345

But which applications involving carbon nanotubes are available at commercial scale today? I am only aware of them being used as an (expensive) filler material.

CNTs are one of those topics that belong in the "pure science" realm. The main issues here are that no reliable method exists to separate metallic from semiconducting CNTs at large scale, and that there is no reliable way of mass-manufacturing CNT transistor structures.

Regarding graphene, there are at least methods to produce it at wafer scale. The problem, however, is that despite graphene's promising electron mobility, the electrical properties of graphene transistors are extremely bad. The latter is due to the absence of a band gap and issues with junction formation.

Comment Remember carbon nanotubes? (Score 3, Interesting) 345

A few years ago, carbon nanotubes were all the rage. An entire generation of PhD students was raised on this material. Carbon nanotubes were the material of the future, enabling the space elevator, nanoscale transistors, near-superconductor conductivity, and so on. What is left today?

Even before that there were C60 buckyballs, another previously unnoticed carbon allotrope. Buckyballs were set to revolutionize chemistry and were (are) part of n-type organic semiconductors. What is left today?

A fad is a fad, even in science. Of all the imagined applications, a few will remain and will be turned into real applications by technologists and engineers. The scientists will move on to the next fad - well, at least those who are quick enough.

Comment Re:Still shocked! (Score 1) 121

Analog is getting bigger and bigger. Many applications are driven by "green" technology - power devices for electric cars, control circuits and switching converters for power conversion, LED controllers, and so on. The automotive semiconductor industry is delighted with the current development. The last figures I heard were that 20-30% of the cost of a European mid-range car is electronics, with a sharp upward trend. American cars and cars for the American market are usually based on somewhat simpler and older technology.

Another thing is that the market entry barriers for analog devices are higher than for digital ones. Analog devices often cannot be made as versatile as digital ones, which is why you need a very wide product range and good customer relationships. Furthermore, you simply cannot hire good analog designers straight out of school. All of these things combined mean that there is a lot of cash in analog.

Comment Better news source (Score 4, Interesting) 769

I found this to be a good source of straight, uneditorialized information: http://www.world-nuclear-news.org/. I cannot vouch for the veracity of the source, but it does not seem to be very biased.

Unfortunately the nuclear accident seems to have overshadowed reports on the real human tragedy - the tsunami and the earthquake. Especially in Germany, the media are exploiting the incident and painting doomsday scenarios. The worst offender seems to be "Der Spiegel", which I held in much higher regard until yesterday.

Comment Re:The PIC was similar (Score 1) 224

> I don't know what Atmel did to deserve their good luck.

There was a long stretch when it was basically AVR vs. PIC when it came to "small developer", a.k.a. hobbyist, microcontrollers. The PIC may have been there first, but the AVR architecture is much more user-friendly and has a following that is at least as large as the PIC's. The reason the AVR is used in the Arduino is probably its microarchitecture, which is friendly to high-level-language compilers.
