
Comment Non-deterministic sort (Score 4, Interesting) 195

Human sorting tends to be rather ad hoc, and this isn't necessarily a bad thing. Yes, if someone is sorting a large number of objects/papers according to a simple criterion, then they are likely to be implementing a version of some formal sorting algorithm... But one of the interesting things about a human sorting things is that they can, and do, leverage some of their intellect to improve the sort. Examples:
1. Change sorting algorithm partway through, or use different algorithms on different subsets of the task. E.g. if you are sorting documents that arrive in random order and suddenly notice a run that is already roughly in order, you'll intuitively switch to a different algorithm for that bunch. In fact, humans very often sub-divide the problem at large into stacks, and sub-sort each stack using a different algorithm, before finally combining the results. This is also relevant since sometimes you actually need to change your sorting target halfway through a sort (when you discover a new category of document/item; or when you realize that a different sorting order will ultimately be more useful for the high-level purpose you're trying to achieve; ...).
2. Pattern matching. Humans are good at discerning patterns. So we may notice that the documents are not really random, but have some inherent order (e.g. the stack is somewhat temporally ordered, but items for each given day are reversed or semi-random). We can exploit this to minimize the sorting effort.
3. Memory. Even though humans can't juggle too many different items in their head at once, we're smart enough that when we encounter an item, we can recall having seen similar items. Our visual memory also allows us to home in on the right part of a semi-sorted stack in order to group like items.

The end result is a sort that is rather non-deterministic, but ultimately successful. It isn't necessarily optimal for the given problem space, but conversely the sorter's intellect lets them generate lots of shortcuts along the way. (By which I mean, a machine limited to paper-pushing at human speed, but implementing a single formal algorithm, would take longer to finish the sort... Of course in reality mechanized/computerized sorting is faster because each machine operation is faster than the human equivalent.)
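
To make the "switch algorithms mid-sort" idea concrete, here is a rough Python sketch of the hybrid approach described in point 1 (the names, the MIN_RUN threshold, and the example data are mine, purely illustrative): it scans once for stretches that are already in order, keeps those, throws the messy leftovers on a "loose pile" that gets sorted wholesale, and then merges all the sorted pieces, loosely in the spirit of what a human (or Timsort) does.

from heapq import merge

MIN_RUN = 4  # runs shorter than this get thrown on the "loose pile"

def split_into_runs(items):
    # Walk the list once, emitting maximal non-decreasing runs.
    runs, start = [], 0
    for i in range(1, len(items) + 1):
        if i == len(items) or items[i] < items[i - 1]:
            runs.append(items[start:i])
            start = i
    return runs

def hybrid_sort(items):
    runs = split_into_runs(list(items))
    # Stacks that are already in order are kept as-is...
    keep = [r for r in runs if len(r) >= MIN_RUN]
    # ...the short, messy stretches are pooled and sorted in one go...
    loose = sorted(x for r in runs if len(r) < MIN_RUN for x in r)
    # ...and everything is combined with a k-way merge of sorted pieces.
    return list(merge(*keep, loose))

print(hybrid_sort([3, 1, 4, 1, 5, 9, 2, 6, 10, 11, 12, 13, 7]))
# -> [1, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13]

The point isn't that this beats a library sort; it's that "notice a run, treat it differently, merge at the end" is itself a perfectly formalizable strategy.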

Submission + - Alternatives to Slashdot post beta? 8

An anonymous reader writes: Like many Slashdotters, I intend to stop visiting Slashdot after the beta changeover. After years of steady decline in the quality of discussions here, the beta will be the last straw. What alternatives to Slashdot have others found? The best I have found has been arstechnica.com, but it has been a while since I've looked for tech discussion sites.

Submission + - Slashdot BETA Discussion (slashdot.org) 60

mugnyte writes: With Slashdot's recently restyled "BETA" slowly being rolled out to most users, there's been a lot of griping about the changes. This is nothing new, as past style changes have had similar effects. However, this time there are significant usability changes: a narrower read pane, limited moderation filtering, and several color/size/font adjustments. BETA implies not yet complete, so, taking that cue, please list your specific, detailed opinions, one per comment, and let's use the best part of Slashdot (the moderation system) to raise attention to them. Change can be jarring, but let's focus on the true usability differences with the new style.

Submission + - Slashdot creates beta site, users express their dislike (slashdot.org) 4

who_stole_my_kidneys writes: Slashdot started redirecting users in February to its newly revamped webpage and received a huge backlash from users. The majority of comments dislike the new site, while some do offer solutions to make it better. The question is: will Slashdot force the unwanted change on users who clearly do not want it?

Submission + - Once Slashdot beta has been foisted upon me, what site should I use instead? 2

somenickname writes: As a long time Slashdot reader, I'm wondering what website to transition to once the beta goes live. The new beta interface seems very well suited to tablets/phones, but it ignores the fact that the user base is, as one would expect, nerds sitting in front of very large LCD monitors and wasting their employers' time. It's entirely possible that the browser ID information gathered by the site has indicated that they get far more hits on mobile devices, where the new interface is reasonable, but I feel that no one has analyzed the browser ID (and screen resolution) against comments modded +5. I think you will find that most +5 comments are coming from devices (real fucking computers) that the new interface does not support well. Without an interface that invites the kind of users who post +5 comments, Slashdot is just a ho-hum news aggregation site that allows comments. So, my question is: once the beta is the default, where should Slashdot users go?

Submission + - Slashdot beta sucks 9

An anonymous reader writes: Maybe some of the Slashdot team should start listening to its users, most of whom hate the new user interface. Thanks for ruining something that wasn't broken.

Comment Re:Just another step closer... (Score 1) 205

You make good points. However, I think you're somewhat mischaracterizing the modern theories that include parallel universes.

So long as we use the real physicists definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

(emphasis added) Your implication is that physicists have invented parallel universes, adding them to their theories. In actuality, parallel realities are predictions of certain modern theories. They are not axioms, they are results. Max Tegmark explains this nicely in a commentary (here or here). Briefly: if unitary quantum mechanics is right (and all available data suggests that it is), then this implies that the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist. Now, you can frame a philosophical question about whether entities in a theory 'exist' or whether they are just abstractions. But it's worth noting that there are plenty of theoretical entities that we now accept as being real (atoms, quarks, spacetime, etc.). Moreover, there are many times in physics where, once we accept a theory as being right, we accept its predictions about things we can't directly observe. Two examples would be: to the extent that we accept general relativity as correct, we make predictions about the insides of black holes, even though we can't ever observe those areas. To the extent that we accept astrophysics and big-bang models, we make predictions about parts of the universe we cannot ever observe (e.g. beyond the cosmic horizon).

An untestable idea isn't part of science.

Indeed. But while we can't directly observe other branches of the wavefunction, we can, through experiments, theory, and modeling, indirectly learn much about them. We can have a lively philosophical debate about the extent to which we are justified in using the predictions of theories to call indirectly-inferred things 'real' vs. 'abstract only'... but my point is that parallel realities are not alone here. Every measurement we make is an indirect inference based on limited data, extrapolated using a model we have some measure of confidence in.

Occam's Razor ...

Occam's Razor is frequently invoked but is not always as useful as people make it out to be. If you have a theory X and a theory X+Y that both describe the data equally well, then X is better via Occam's Razor. But if you're comparing theories X+Y and X+Z, it's not clear which is "simpler". You're begging the question if you say "Clearly X+Y is simpler than X+Z! Just look at how crazy Z is!" More specifically: unitary quantum mechanics is arguably simpler than quantum mechanics + collapse. The latter involves adding an ad-hoc, unmeasured, non-linear process that has never actually been observed. The former is simpler at least in description (it's just QM without the extra axiom), but as a consequence it predicts many parallel branches (it's actually not an infinite number of branches: for a finite volume like our observable universe, the number of possible quantum states is large but finite). Whether an ad-hoc axiom or a parallel-branch prediction is 'simpler' is debatable.
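
As a rough illustration of "large but finite" (this is the back-of-the-envelope figure Tegmark quotes in his popular-level writing, not something from the parent post, and the exact exponent is beside the point): if the observable universe contains at most on the order of 10^118 protons, a crude bound on the number of distinguishable configurations is

2^(10^118) ≈ 10^(3×10^117)

which is absurdly large, but still finite.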

Just about any other idea looks preferable to an idea that postulates an infinite number of unverifiable consequents.

Again, the parallel branches are not a postulate, but a prediction. They are a prediction that bothers many people. Yet attempts to find inconsistencies in unitary quantum mechanics have so far failed. Attempts to observe the wavefunction collapse process have also failed (there appears to be no limit to the size of the quantum superposition that can be generated). So the scientific conclusion is to accept the predictions of quantum mechanics (including parallel branches), unless we get some data that contradicts them. Or, at the very least, not to dismiss these predictions entirely unless you have empirical evidence against either them or unitary quantum mechanics itself.

Comment Re:Can't have it both ways (Score 1) 330

I disagree. Yes, there are tensions between openness/hackability/configurability/variability and stability/manageability/simplicity. However, the existence of certain tradeoffs doesn't mean that Apple couldn't make a more open product in some ways without hampering their much-vaunted quality.

One way to think about this question is to analyze whether a given open/non-open decision is motivated by quality or by money. A great many of the design decisions being made are not in the pursuit of a perfect product, but are part of a business strategy (lock-in, planned obsolescence, upselling of other products, DRM, etc.). I'm not just talking about Apple; this is true very generally. Examples:
- Having a single set of hardware to support does indeed make software less bloated and more reliable. That's fair. Preventing users from installing new hardware (at their own risk) would not be fair.
- Similarly, having a restricted set of software that will be officially supported is fine. Preventing any 'unauthorized' software from running on a device a user has purchased is not okay. The solution is to simply provide a checkbox that says "Allow 3rd party sources (I understand this comes with risks)", which is what Android does and iOS does not.
- Removing seldom-used and complex configuration options from a product is a good way to make it simpler and more user-friendly. But you can easily promote openness without making the product worse by leaving configuration options available but less obvious (e.g. accessible via command-line flags or a text config file; see the sketch after this list).
- Building a product in a non-user-serviceable way (no screws, only adhesives, etc.) might be necessary if you're trying to make a product extremely thin and slick.
- Conversely, using non-standard screws, or using adhesives/etc. where screws would have been just as good, is merely a way to extract money from customers (forcing them to pay for servicing or buy new devices rather than fix old hardware).
- Using bizarre, non-standard, and obfuscated file formats or directory/data-structures can in some cases be necessary in order to achieve a goal (e.g. performance). However, in most cases it's actually used to lock in the user (prevent the user from directly accessing data, prevent third-party tools from working). E.g. the way that iPods appear to store music files and metadata is extremely complex, at least last time I checked (all files are renamed, so you can't simply copy files to and from the device). The correct solution is to use open formats. In cases where you absolutely can't use an established standard, the right thing to do is to release all your internal docs so that others can easily build upon or extend it.
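
Here's the sketch promised above: a small, hypothetical Python example of the "available but less obvious" pattern. Sane defaults, nothing extra in the GUI, but an optional plain-text config file and a command-line flag for those who go looking. The file name and option names are invented for illustration, not taken from any real product.

import argparse
import configparser
from pathlib import Path

# Defaults that ordinary users never have to think about.
DEFAULTS = {"animations": "on", "telemetry": "off", "dev_mode": "off"}

def load_settings(config_path="~/.myapp/advanced.ini"):
    settings = dict(DEFAULTS)
    path = Path(config_path).expanduser()
    if path.exists():  # silently ignored if the file was never created
        parser = configparser.ConfigParser()
        parser.read(path)
        if parser.has_section("advanced"):
            settings.update(parser["advanced"])  # power-user overrides
    return settings

def main():
    cli = argparse.ArgumentParser(description="hypothetical app")
    cli.add_argument("--allow-third-party", action="store_true",
                     help="permit third-party packages (I understand the risks)")
    args = cli.parse_args()
    settings = load_settings()
    settings["third_party"] = "on" if args.allow_third_party else "off"
    print(settings)

if __name__ == "__main__":
    main()

The as-shipped experience is unchanged; the openness costs nothing beyond documenting the file format.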

To summarize: yes, there are cases where making a product more 'open' will decrease its quality in other ways. But there are many examples where you can leave the option for openness/interoperability without affecting the as-sold quality of the product. (Worries about 'users breaking their devices and thus harming our image' do not persuade; the user owns the device, and ultimately we're talking about experienced users and third-party developers.) So, we should at least demand that companies make their products open in all those 'low-hanging-fruit' cases. We can then argue in more detail about fringe cases where there really is an openness/quality tradeoff.

Comment Re:n = 1.000000001 (Score 3, Informative) 65

I'm somewhat more hopeful than you, based on advances in x-ray optics.

For typical x-ray photons (e.g. 10 keV), the refractive index is 0.99999 (delta = 1E-5). Even though this is very close to 1, we've figured out how to make practical lenses. For instance, compound refractive lenses use a sequence of refracting interfaces to accumulate the small refractive effect. Capillary optics can be used to confine x-ray beams. A Fresnel lens design can be used to decrease the thickness of the lens, giving you more refractive power per unit length of the total optic. In fact, you can use a Fresnel zone plate design, which focuses the beam via diffraction (another variant is a Laue lens, which focuses via Bragg diffraction; e.g. multilayer Laue lenses are now being used for ultrahigh focusing of x-rays). Clever people have even designed lenses that simultaneously exploit refractive and diffractive focusing (kinoform lenses).

All this to say that, with some ingenuity, the rather small refractive index differences available for x-rays have been turned into decent amounts of focusing in x-ray optics. We have x-ray optics now with focal lengths on the order of meters. It's not trivial to do, but it can be done. It sounds like this present work is suggesting that for gamma-rays the refractive index differences will be on the order of 1E-7, which is only two orders of magnitude worse than for x-rays. So, with some additional effort and ingenuity, I could see the development of workable gamma-ray optics. I'm not saying it will be easy (we're still talking about tens or hundreds of meters for the overall camera)... but for certain demanding applications it might be worth doing.
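
To make the scaling concrete: in the thin-lens approximation, a stack of N parabolic refractive lenses with apex radius R has focal length f ≈ R / (2·N·delta). A quick Python sketch with illustrative numbers (R and N are assumptions I picked, not values from the article):

# Compound refractive lens focal length, thin-lens approximation:
#   f ≈ R / (2 * N * delta)
# where R is the apex radius of each parabolic lens and N is the lens count.
def crl_focal_length(radius_m, n_lenses, delta):
    return radius_m / (2 * n_lenses * delta)

R = 50e-6  # 50 micron apex radius (illustrative assumption)
N = 100    # number of stacked lenses (illustrative assumption)

print(crl_focal_length(R, N, delta=1e-5))  # ~0.025 m, x-ray-like delta
print(crl_focal_length(R, N, delta=1e-7))  # ~2.5 m, gamma-ray-like delta

In other words, with the same stack geometry a delta of 1E-7 pushes the focal length into the meters range: long, but not obviously impractical.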

Comment High resolution but small volume (Score 5, Informative) 161

The actual scientific paper is:
C. L. Degen, M. Poggio, H. J. Mamin, C. T. Rettner, and D. Rugar, "Nanoscale magnetic resonance imaging," PNAS 2009, doi: 10.1073/pnas.0812068106.

The abstract:

We have combined ultrasensitive magnetic resonance force microscopy (MRFM) with 3D image reconstruction to achieve magnetic resonance imaging (MRI) with resolution <10 nm. The image reconstruction converts measured magnetic force data into a 3D map of nuclear spin density, taking advantage of the unique characteristics of the 'resonant slice' that is projected outward from a nanoscale magnetic tip. The basic principles are demonstrated by imaging the 1H spin density within individual tobacco mosaic virus particles sitting on a nanometer-thick layer of adsorbed hydrocarbons. This result, which represents a 100 million-fold improvement in volume resolution over conventional MRI, demonstrates the potential of MRFM as a tool for 3D, elementally selective imaging on the nanometer scale.

I think it's important to emphasize that this is a nanoscale magnetic imaging technique. The summary implies that they created a conventional MRI that has nanoscale resolution, as if they can now image a person's brain and pick out individual cells and molecules. That is not the case! And that is likely to never be possible (given the frequencies of radiation that MRI uses and the diffraction limit that applies to far-field imaging).

That having been said, this is still a very cool and noteworthy piece of science. Scientists use a variety of nanoscale imaging tools (atomic force microscopes, electron microscopes, etc.), but having the ability to do nanoscale magnetic imaging is amazing. In the article they do a 3D reconstruction of a tobacco mosaic virus. One of the great things about MRI is that it has some amount of chemical selectivity: there are different magnetic imaging modes that can differentiate based on chemical makeup. This nanoscale analog can use similar tricks: instead of just getting images of surface topography or electron density, it could actually determine the chemical makeup within nanostructures. I expect this will become a very powerful technique for nano-imaging over the next decade.
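
For a rough sense of why far-field imaging at MRI frequencies can't reach the cellular scale, here is a back-of-the-envelope Python sketch (the 3 T field strength is my assumed example, not something from the paper):

# Back-of-the-envelope: wavelength of the RF used in conventional MRI.
C = 2.998e8          # speed of light, m/s
GAMMA_H = 42.577e6   # proton gyromagnetic ratio, Hz per tesla

B0 = 3.0                   # assumed clinical field strength, tesla
f_larmor = GAMMA_H * B0    # ~1.28e8 Hz (128 MHz)
wavelength = C / f_larmor  # ~2.3 m

print(f"Larmor frequency: {f_larmor / 1e6:.0f} MHz")
print(f"Free-space wavelength: {wavelength:.2f} m")
print(f"Far-field (Abbe-style) limit ~ lambda/2: {wavelength / 2:.2f} m")

Meter-scale wavelengths are why a far-field approach can't get anywhere near nanometer resolution; MRFM sidesteps this by working in the near field, measuring tiny forces from spins in the resonant slice around a magnetic tip.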
