Comment Re:Let's look at what their record has been? (Score 1) 93

To pre-empt nitpicks, when I said this:

The predictions from 2006 are predictions for 2012.

I'm well aware that 2006+5=2011. I'm trying to be as generous as possible in my assessment. If you make a prediction at the very end of 2006, for "5 years in the future", then you have until Dec 31, 2011 for that prediction to come true (and the results should be visible in 2012). Thus, their 2007 predictions have until the very last day of 2012 to be realized, if we want to be generous.

Of course even being generous, their predictions are rather awful.

Comment Re:Let's look at what their record has been? (Score 5, Interesting) 93

Let's delve into the details a bit. The predictions from 2006 are predictions for 2012. Have they come to pass?

1. Prediction: "We will be able to access healthcare remotely, from just about anywhere in the world" The prediction describes online health records, and telemedicine.
Reality: There have been some efforts, in some countries, to digitize records. Many have failed, some are moving forward. However, to my knowledge, none of them have gained wide acceptance (nor overcome the huge privacy and legal obstacles). The level of web-integration of our records today is not much different from 2006. As for telemedicine? There have been a few more flashy proof-of-principle demonstrations, but nothing has become routine.

2. Prediction: "Real-time speech translation—once a vision only in science fiction—will become the norm"
Reality: Microsoft recently demonstrated realtime English-to-Chinese translation. However, the very media buzz about that shows that it is far from "the norm". What we have is just tightly-controlled tech demos, not technology integrated into all of our smartphones ("the norm"). It's likely that existing software will get better (text translation has become amazingly good of late)... but it didn't happen within the 5 years they estimated.

3. Prediction: "There will be a 3-D Internet", by which they seemed to have meant three-dimensional navigation/environments (virtual-reality-like).
Reality: Same as 2006, really. We had Second Life, and we still do. We had 3D video-games, and we still do. In fact, this was quite a silly prediction to make in 2006, given how much was already known at that time...

4. Prediction: "Technologies the size of a few atoms will address areas of environmental importance"; this is a vague prediction wherein they reference "Green Chemistry" as if they invented it (they didn't).
Reality: I don't know how to judge this one, since they didn't really make a prediction. There's been more research in the area of green chemistry. Nothing revolutionary has happened in the last 5 years, though.

5. Prediction: "Our mobile phones will start to read our minds", which they clarify as meaning that "mobile devices and networks [will] (with consent) learn about their users' whereabouts and preferences"
Reality: We can be generous and say that this has come to pass, in the form of smartphones and their associated ecosystem of apps. As a particular example, Google Now (available on Android 4.1 and later) provides contextual information to the user without the user having to explicitly arrange it. For example it warns you that you have to leave now to get to a particular appointment (based on knowledge of your location, the appointment location, and current traffic). If you're at a bus stop, it automatically pulls up the schedule. These kinds of tricks are neat, and will no doubt become more sophisticated with time.

So, my assessment is that their past predictions are right about 20% of the time.

Submission + - Google trading suspended, earnings 20% below expectations posted accidentally

An anonymous reader writes: Trading in Google shares has been suspended after the internet giant released its third-quarter results early by mistake. Google blames financial printing firm RR Donnelley for filing an early draft of the results, which had been expected after the closing bell.

Shares in Google were down 9% when trading in the stock was suspended. Shares had fallen as much as 10.5% at one stage.

In a statement, Google said: "Earlier this morning RR Donnelley, the financial printer, informed us that they had filed our draft 8K earnings statement without authorisation... We have ceased trading on Nasdaq while we work to finalise the document. Once it's finalised we will release our earnings, resume trading on Nasdaq and hold our earnings call as normal at 1:30 PST."

Comment Re:High Skilled Professions put in more hours (Score 5, Insightful) 454


The letter-to-students suggests that 80 hours should be the regular work-week. That works out to:
16 hours/day, 5 days per week, or
13 hours/day, 6 days per week, or
11 hours/day, 7 days per week.

Assuming 7 hours of sleep, three 0.5 hour lunch diversions, 1 hour for commuting, and 0.5 hours/day for bathroom breaks, this leaves the person with about 2.5 hours/day for everything else: running errands, doing laundry, exploring hobbies, relaxing, etc. This is not a fun way to live, and it's also not a sustainable way to live/work: trying to work that hard inevitably results in people being burnt-out, constantly tired, and not very productive. This is especially true in highly-skilled jobs, where the quality of your work comes down to how alert your mind is, and how creative you are... both of which require rest, relaxation, and time spent on diversions.
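As a sanity check, that time budget works out in a couple of lines (using the gentlest split, 7 days/week):

```python
# Back-of-the-envelope check of the daily time budget under an 80-hour
# work-week spread over 7 days.
work = 80 / 7            # ~11.4 hours/day at the desk
sleep = 7.0
lunches = 3 * 0.5        # three half-hour meal breaks
commute = 1.0
bathroom = 0.5

free_time = 24 - work - sleep - lunches - commute - bathroom
print(round(free_time, 1))  # → 2.6, i.e. about two and a half hours/day left
```

So even under the most forgiving schedule, everything outside work and basic bodily maintenance has to fit into roughly two and a half hours a day.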

The 80-hour week is also a lie. That's not how much the professors worked when they were in grad school. No doubt they worked 80-hour weeks on occasion, and those may have even been productive weeks. But there's no way they sustained that kind of work for the entirety of grad school. When I was in grad school we all routinely worked long hours (more than 40 hours/week), and occasionally crazy hours (80-hour weeks were not at all unheard of). But students who tried to sustain crazy 70+ hour weeks (e.g. because of pressure from their supervisor) burned out incredibly quickly.

The letter was trying to encourage the students to work hard and be passionate, which are indeed crucial for grad school. But setting an arbitrary and frankly ridiculous rule like "80 hours/week" undermines this message.

Comment Re:A giant waste of time (Score 1) 34

The Slashdot headline frames this in terms of "Learning HTML", but it's worth noting that the creators of the game don't view it that way. In their FAQ, they say:

Why "almost educative"? The game might have some educative values, because if you play it you learn things about HTML and the basic rules of programming. But the aim of the game is not to be "educative", it's first to be played, to be fun and enjoyed by everyone. You can eventually learn something but it's a plus... not the ultimate goal.

Comment Re:The numbers (Score 1) 123

This article has the title "Tenfold increase in scientific research papers retracted for fraud", but at least it mentions some actual numbers:

In addition, the long-term trend for misconduct was on the up: in 1976 there were only three retractions for misconduct out of 309,800 papers (0.00097%) whereas there were 83 retractions for misconduct out of 867,700 papers at a recent peak in 2007 (0.0096%).

Percentage-wise, we're talking about a very small number of papers. They quote one of the authors:

"The better the counterfeit, the less likely you are to find it – whatever we show, it is an underestimate," said Arturo Casadevall, professor of microbiology, immunology and medicine at the Albert Einstein College of Medicine in New York and an author on the study.

While this is indeed true... even if the true number of misconduct cases is ten-fold what they measured, it's still a small fraction of the literature. Of course, any number of fraudulent papers is cause for concern (and we should work to remedy the situation); but these results should not cause us to call into question the majority of published science. In fact it points towards the vast majority of papers surviving scrutiny.
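For the skeptical, the quoted rates (and the headline "tenfold") check out with a few lines of arithmetic:

```python
# Verifying the retraction-for-misconduct rates quoted above.
rate_1976 = 3 / 309_800    # 3 retractions out of 309,800 papers
rate_2007 = 83 / 867_700   # 83 retractions out of 867,700 papers

print(f"{rate_1976:.5%}")            # 0.00097%
print(f"{rate_2007:.4%}")            # 0.0096%
print(round(rate_2007 / rate_1976))  # 10 -- i.e. roughly a tenfold increase
```

Even the 2007 peak is about one retraction for misconduct per ten thousand papers.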

Comment Re:Your side is always the good guys. (Score 5, Insightful) 233

To expand upon this...

If someone's primary justification for decrying GPL violations is that it's wrong to violate copyright, then it would indeed be hypocritical to support piracy of closed-source software. More generally, if the moral argument is that intellectual creation endows people with some intrinsic 'control' or 'ownership' of their creative works, then this moral argument applies equally to open-source and closed-source creations.

However, that is not the only argument in favor of respecting open-source licenses. In fact it may not be the most prevalent. Many people support open-source software because they fundamentally believe in the particular freedoms that are espoused by open-source licenses: that end-users should be unrestricted; that end-users should in fact be empowered to completely control their hardware, which means having the ability to see and edit all source-code; that sharing should be encouraged. Under the moral axioms of 'sharing is good' and/or 'users should be unrestricted' it is not inconsistent to encourage people to respect open-source licenses while simultaneously not respecting restrictive closed-source (or all-rights-reserved) copyrights/EULAs/etc.

My point here is not to promote any particular viewpoint. Rather, I'm responding to OP's assertion that it is hypocritical to support open-source licenses while simultaneously decrying closed-source licenses (or even going so far as to violate them). It may be hypocritical, or it may be consistent. (There's no lack of hypocrisy in this world, Slashdot included.) Many things look hypocritical only because one is making an assumption about the moral precepts that should be followed (normally, one thinks people are hypocritical because their morals are different from your own).

Submission + - It's time to start paying for Android updates

MrSeb writes: "As the days and weeks continue to flow by like a lazy river, Android 4.0 Ice Cream Sandwich (ICS) is still stuck someplace upstream from the vast majority of users. The newest version of Google’s platform was first released back in November of 2011, and there are still only a handful of devices outside the flagship Galaxy Nexus that run it. Unlike some past updates, this one is a real departure for Android. The user interface has been totally revamped, the stock apps are better than ever, and system-level hardware acceleration is finally available. It’s no secret that the update system for Android is a mess of monumental proportions. Not even Google’s efforts at I/O 2011 produced any concrete solutions. Many users waited the better part of a year for Gingerbread updates on their devices, and still others got no Gingerbread at all. With ICS being as important as it is, it’s time to talk about a radical step to make updates work — it’s time to pay for them."

Comment Re:Sure they can (Score 3, Insightful) 630

The other thing is that many of us on /. may not quite grasp how normal people use computers, and how much simpler something like live tiles could be. How many computers do you see that have a desktop full of icons, people who can't manage simple things like bookmarks etc.

I see what you're saying, but I think Windows 8/Metro is a failure in this regard, mainly because Microsoft didn't go "whole hog" with this new design ethos. If you think of an iPad, it really does reduce complexity for the end user, by getting rid of so many of the things that a normal desktop computer does. This is somewhat annoying if you're trying to do something more complicated, but it does indeed simplify the computing experience for many people.

But in Windows 8, it seems that you have all the usual complexity of the conventional desktop, plus this new Metro thing. So now your average user not only has to manage all the files on the hard-drive, and all the icons on their desktop, and all the windows in the usual desktop/window interface... they additionally have to figure out and manage live tiles. Worst of all, they now have two competing metaphors: desktop windows and live tiles, which sometimes work together, sometimes duplicate functionality, and sometimes are totally distinct ("I remember being able to make this work... but was it a Metro app or a regular desktop app I did it in?").

One of the most basic principles in UI design is consistency. Being consistent lets users develop muscle memory, simplifies their mental model for the computer, and lets them predict the behavior of new, unfamiliar software. Being a slave to consistency can be bad (and stifle innovation), but conversely if you break consistency you need to have a really good reason: the gain in productivity or power must be sufficient to offset the user confusion. (This is at least one reason that we stick with so many arbitrary conventions in our computers: they may not be the best conventions but by being consistent people can at least learn them.)

Windows 8/Metro breaks consistency in a major way. Not just in breaking with tradition (which can be justified if the new interface is sufficiently better), but by having internal inconsistency between the two competing UI metaphors. By not committing to one or the other, MS is making both of them more confusing.

You may argue that novice users will just stick to the simplicity of Metro, and never be bothered by the complexity of the traditional desktop (which will be available for power users that need it)... but I am unconvinced to say the least. Legacy software will jolt the user back into the desktop. Even novice users have probably used a conventional desktop and will try to get back into it. Metro in general does not appear to reproduce all the functionality of the conventional desktop. So users will now have to flip between the two different modes all the time. In fact some have also argued the opposite: that novice users will stick to the desktop and ignore Metro (or just use it as a fancy app launcher). This still adds needless complexity. Either way, this is a UI disaster.

It's been said so many times that it's almost pointless to say it again: Metro looks like a very nice UI solution for mobile and tablets. But whoever thought it was the future of desktop computing needs to have their head examined.

Comment Re:Torture (Score 4, Interesting) 357

There's that. There's also the fact that these non-lethal weapons are intended to be used against someone who is being violent: in other words, they are a last resort to subdue someone out of control before they do serious harm to someone, whether that be another citizen (either protestor or bystander), a police officer, or even the person hurting themselves. The purpose in using a non-lethal weapon is that in doing this harm to them, you will prevent a much greater harm.

Which, really, highlights how inappropriately all these non-lethal weapons and anti-riot instruments are used nowadays. They've gone from 'preventing imminent violence and harm' to 'making someone unstable easier to deal with' to 'a way to subdue someone, no different from handcuffing them really'. It's positively criminal and evil how thoughtlessly devices like tasers, rubber bullets, and mace are used nowadays by law enforcement. These things were designed as last resorts and are now being used routinely. If a person is being disruptive but there is no imminent threat of harm, then these tools should not be used. Even if the person has clearly broken a law and needs to be arrested, these tools should be avoided: the person should be subdued peacefully somehow (sometimes this means just waiting, letting them yell and whatnot, until they tire themselves out and can be safely arrested).

Comment Re:No headache? (Score 4, Informative) 52

For those with access, here's the actual scientific article:
Alexander M. Stolyarov, Lei Wei, Ofer Shapira, Fabien Sorin, Song L. Chua, John D. Joannopoulos & Yoel Fink, "Microfluidic directional emission control of an azimuthally polarized radial fibre laser", Nature Photonics (2012), doi:10.1038/nphoton.2012.24

Here is the abstract:

Lasers with cylindrically symmetric polarization states are predominantly based on whispering-gallery modes, characterized by high angular momentum and dominated by azimuthal emission. Here, a zero-angular-momentum laser with purely radial emission is demonstrated. An axially invariant, cylindrical photonic-bandgap fibre cavity filled with a microfluidic gain medium plug is axially pumped, resulting in a unique radiating field pattern characterized by cylindrical symmetry and a fixed polarization pointed in the azimuthal direction. Encircling the fibre core is an array of electrically contacted and independently addressable liquid-crystal microchannels embedded in the fibre cladding. These channels modulate the polarized wavefront emanating from the fibre core, leading to a laser with a dynamically controlled intensity distribution spanning the full azimuthal angular range. This new capability, implemented monolithically within a single fibre, presents opportunities ranging from flexible multidirectional displays to minimally invasive directed light delivery systems for medical applications.

In answer to your question, no this isn't a hologram, although in some sense it achieves a similar goal. Regular screens control the emission of light as a function of position. Holograms control not just the intensity of the emanating light but also the phase; this phase information carries all the extra information about the light field passing through a given plane. This new device controls the intensity and angular spread of the light coming from each pixel, thereby controlling the full shape of the light-field being emitted from the plane of the screen.

With both a hologram and this directional-emission concept, you're controlling the angular spread of the light coming from each point, and are thus fully specifying the light-field, and thus creating 'proper 3D' that is physically-realistic and fully convincing. (Assuming you have enough angular resolution in your output to create the small differences the eye is looking for, of course.)

As for why they are using a laser as the source light, it's mostly because they want detailed polarization control. (Coupling lasers into fiber-optics is well-established technology for telecommunications.) By controlling the exact mode of the laser-light propagation through the fiber, they can control the polarization of the light that shines out of the fiber, and thereby use conventional tricks to modulate that light. In particular, in an LCD screen, small fields are used to re-orient liquid-crystal molecules, which then either extinguish or transmit the light (based on whether the orientation of the LC molecule is aligned with the polarization of the light).

Overall it's an ingenious trick: have a light fiber emit light with controlled polarization. Then have a series of LC pixels on the outside of the fiber, whose orientation can now not just modulate the intensity of emission as a function of position along the fiber, but also as a function of angle for each position along the fiber. The end result is that you control the light field emanating from the device, and so can (in principle) reconstruct whatever full-3D image you want.

Of course the prototype in the article only has four LC channels along the fiber. Enough to create a different image on the front vs. the back of the screen. Not nearly enough to create realistic 3D. Also they are only controlling the angle in one direction (around the fiber axis), and not the other (the tilt angle with respect to the fiber axis). But scaling up of the concept (where the fiber has thousands of LC polarizers for various angles) should allow for some really amazing display technology.

Comment Re:No headache? (Score 5, Informative) 52

Is there a word for where both eyes' 'beams' are pointing to?

That's usually called convergence. It's one of at least 5 ways that humans infer distances and reconstruct the third dimension from what they see:
1. Focal depth: based on how much the eye's lens has to focus
2. Convergence: based on the slight differences in pointing of the two eyes
3. Stereoscopy: based on the slight differences between the left and right image
4. Parallax: the different displacements/motions of objects at different distances (e.g. when you move your head)
5. Visual inference: reconstructing using cues like occlusion, lighting, shadows, etc.

Unless all 5 of those agree, the image won't look 'truly 3D': it will seem wrong and in many cases can cause headaches or nausea (your brain is getting conflicting information for which there is no physically-correct solution). The reason that current 3D systems fail is that they don't match all 5. A regular 2D movie (or a photograph, etc.) gives you #5 and that's it. This actually works remarkably well. Glasses-based 3D systems try to trick you by giving each eye a slightly different image, which adds #3, but since 1, 2 and 4 are still wrong, the overall effect feels weird: your eyes still have to point at, and focus on, the movie screen. (It's even worse for 3D-TV since you are focusing on something relatively close to you.)

The reason this happens is precisely because a movie/TV screen has spatial resolution (each pixel is different) but no angular resolution (the image on the screen is the same regardless of where your head/eyes are positioned). If you could add back in the angular information (with enough resolution), then you could create an arbitrary light field, that was indistinguishable from a physically-realistic light field. If done right in terms of angular resolution and computing a physically-correct light field, then this would give you 1,2, 3, and 4. (And 5 also, if what's being projected is a realistic scene with proper shadowing and so forth.) If the light field is properly created, each eye will get a slightly different image (since each eye is at a slightly different angle with respect to the screen); these images will change as you move your head around; and your eyes will in fact NOT focus or converge on the location of the screen: they will focus and converge on the virtual image being created by the light field emanating from the screen. (This is similar to a hologram, which can be a two-dimensional sheet and yet reconstruct the light field that would come from a three-dimensional object, and can create virtual images that are not in the plane of the sheet.)

The prototype being demonstrated in this article is not good enough to do that, mind you: they don't have enough angular resolution to trick your eyes. However that's where this technology is headed, and if it's done at high enough resolution, we will finally get proper 3D: where we're not just tricking your eyes, but where we're actually projecting the correct light field towards the viewer.

Comment Re:Similar software (Score 2) 103

LastCalc looks absolutely amazing! I love Google's ability to do on-the-fly math with unit conversion, and it seems that LastCalc is giving us this and more! It's great.

A question for you (or a feature request, I suppose): how do we add more information to the behind-the-scenes taxonomy? For instance, if I go "2*pi*1 nanometers in angstroms" it correctly converts from "nanometers" to "angstroms". However if I use "nm" instead, it doesn't know what I mean. Of course I can add a definition "1 nm = 10 angstroms" and from then on it works correctly... but I don't want to have to add that every time I use LastCalc!

Presumably you have a database behind-the-scenes with taxonomies for various units. Is there any way for end-users to edit that taxonomy (wiki-style), or perhaps submit new relations/data for inclusion? Now that you're open-sourcing this project, it seems like you could take advantage of community involvement to expand and refine the taxonomy, making the system ever-more-powerful. (I see you have a Google Group... so, is the intention that people just discuss this in that forum? Seems like it would be more efficient to have a wiki or open database where people (even non-programmers) could contribute suggestions for units/relations/etc.)
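To make the idea concrete, here's a minimal sketch of the kind of community-editable alias table I'm imagining: aliases plus conversion factors to a base unit. To be clear, all the names and structure here are my invention, not LastCalc's actual internals.

```python
# Hypothetical unit taxonomy: each alias maps to a conversion factor into
# a common base unit (metres, here). Nothing below reflects how LastCalc
# actually stores its units.
TO_METRES = {
    "nanometers": 1e-9,
    "nm": 1e-9,          # the alias I wish the site already knew about
    "angstroms": 1e-10,
}

def convert(value, src, dst):
    """Convert `value` from unit `src` to unit `dst` via the base unit."""
    return value * TO_METRES[src] / TO_METRES[dst]

print(round(convert(1, "nm", "angstroms"), 6))  # → 10.0
```

With a table like that exposed for community editing, adding "nm" once would fix it for every user, instead of each of us re-defining it per session.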

Anyways, thanks for your efforts on what looks like a great project. I hope you keep it up!

Comment Re:No (Score 5, Insightful) 502

That's correct: the device is using both electrical and thermal energy input to generate light output.

Now, some people might still be bothered by this, because the idea of using ambient heat to do useful work is another one of those "perpetual motion machine" kind of claims. Heat represents a disordered (high-entropy) state, from which you cannot extract useful work. The relevant thought experiment here is the Brownian ratchet: the idea being that you have a ratchet that gets bombarded by random molecular collisions (in water or air, say). The ratchet will turn forward when a random collision is strong enough, and so over time you can use this turning motion to wind a spring and thus convert random thermal motion into stored energy. The reason this doesn't work in real life is because if random thermal motion is enough to overcome the pawl on the ratchet, then the pawl will be 'hot' enough that it will randomly and spontaneously lift up, turning the wheel backwards. The only way to avoid this is to have the pawl at a lower temperature than the rest of the mechanism: this works, but it's well-known that you can extract useful work from a thermal gradient, so the laws of thermodynamics remain intact.

Coming back to this present result, how does this device use ambient heat to generate useful photons? Sure, it acts as a thermoelectric cooler, establishing a local thermal gradient, but this sounds like 'cheating' in that it's a way to extract energy from the entropy of the surroundings! The very first sentence of the scientific paper addresses this:

The presence of entropy in incoherent electromagnetic radiation permits semiconductor light-emitting diodes (LEDs) to emit more optical power than they consume in electrical power, with the remainder drawn from lattice heat [1,2].

Basically, the device is converting high-entropy thermal energy into even higher entropy incoherent electromagnetic radiation (light output). So, the second law of thermodynamics is not violated. Essentially, this device is acting as a way to connect thermal degrees of freedom to E&M degrees of freedom. The system, wanting to increase entropy as much as possible, tries to spread energy through all these degrees of freedom, which means creating some photons at the expense of some of the heat in the material.

It's a neat bit of physics, and will probably have implications for device efficiency and other applications.
