
Comment Re:Just in time too. (Score 1) 267

> 99% of the computing needs of 99% of the people can be met by the existing

You know, this phrase has been uttered so many times that it has become completely meaningless. Please define "computing needs".

If you had asked someone in the 1950s, they would have told you that the average person needs some help adding up numbers to balance a checkbook. So a simple calculator should be enough, right? Nobody would have considered that people in the 2000s would deem it a worthwhile endeavour to use, simply for gaming, processing power that exceeds the computing power of the entire civilization of the 1950s by orders of magnitude.

Today people use a different frame of reference based on the applications they know today. The mistake is still the same.

Why do we need more of Moore's law?
- Right now everything is about the Internet of Things. Moore's law is not only about transistor density, it is also about power. We need extremely low-power computing. The trillion-sensor revolution is not a joke. It is happening right now.
- We are still orders of magnitude away in computing power from anything required to make truly intelligent systems. There are huge research projects right now pushing the understanding of the human brain (the EU Human Brain Project, for example). If you want your fridge to be as intelligent as a dog, you may want it to have more computing power than your current PC or smartphone.

Comment Re:I will believe it when I can buy it (Score 1) 107

>Every one of the inventions is being pulled forward. It is clear you have no idea what's available out there. Thin film is beginning to dominate commercial installation, in fact it's so much better that it's very difficult to even purchase thin films any more because all the production is allocated to commercial installations. Other

Bullshit. Most thin-film technologies are DOA. There are two technologies that seem to succeed in the market: CdTe and CIGS. However, due to their low conversion efficiency they are only used in big projects. You are not going to see thin film in residential installations anytime soon. In fact First Solar, the thin-film market leader, recently acquired a crystalline-silicon company to introduce a product in that sector.

>techniques are out there and being used, the better the cell the more likely it'll be relegated to commercial installation. Most of what's available for retail purchase is the output of older cell lines that are no longer competitive on the commercial side.

It would be nice if market consolidation were driven by technological differentiation, but sadly that is not the case. Older cell lines are only uncompetitive due to poor scaling effects (low throughput, low degree of automation, etc.), not due to their cell technology.

Comment Re:This has got to be the 37th amazing improvement (Score 2) 107

>They do, all the time.

In fact, they never do at all! If you look at the market statistics, you will notice that >80% of the market is crystalline silicon. And while there are different ways to manufacture crystalline silicon solar cells, companies have been extremely reluctant to introduce new technologies. In fact, almost all solar cells today are still made with the same manufacturing process steps as 10 years ago. Conversion efficiencies have improved simply by tweaking these process steps.

>Why do you think the cost of solar has decreased by 90% over the last 30 years?

I know why the cost has decreased:
- Manufacturing cost reduction through scaling effects
- Very significant cost reduction in raw materials
- Reduction of material consumption through process optimization
- And, to a smaller extent, improvement of conversion efficiencies through process optimization.

News about surface plasmonic effects, black silicon and the like surfaces every other week. However, they have not inched any closer to production than they were 5 years ago.

Photovoltaic modules are a commodity. The technology and science behind them are of limited depth, not comparable to the semiconductor industry. Look elsewhere if you want to innovate in technology.

What is needed is innovation on the system level, products and marketing.
 

Comment WTF is this shit? (Score 0) 334

WTF is this shit? I don't even know where to start...

1) Bad audio quality
2) Bad video quality
3) Cat background?
4) Some guy who never in his life got close to a woman who would qualify as a "booth babe" talking about "booth babes"?

This is just pathetic.

Know your limits. Know your audience. Even if Slashdot is dying, please let it die with dignity!

Man, I really wish Slashdot would go down with a story about how 2017 is finally the year of Linux on the desktop.

Comment Re: 1000 times better? (Score 3, Informative) 103

Some people do not seem to understand the term "quantum efficiency" (QE).

Quantum efficiency measures the fraction of incident photons that are actually detected by the camera.
An external quantum efficiency of 50% means that 50% of all incident photons are converted into electron-hole pairs and can be detected.
There are, however, loss mechanisms that prevent all e-h pairs from being collected. But real sensors are not off by a factor of 1000 from the theoretical limit.
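To put numbers on it (these figures are my own, purely illustrative): since QE is a fraction bounded by 1.0, a sensor already at 50% external QE has at most 2x headroom at those wavelengths, nowhere near a factor of 1000.

```python
# Toy illustration of external quantum efficiency (QE): the fraction of
# incident photons that yield a detectable electron-hole pair.
incident_photons = 10_000
qe_current = 0.50          # 50% external QE, as in the example above
qe_limit = 1.00            # physical ceiling: every photon detected

detected_now = incident_photons * qe_current   # photons actually detected
max_gain = qe_limit / qe_current               # best possible improvement
print(detected_now, max_gain)
```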

As the original poster already stated, this figure is probably for some other wavelength range, like the far infrared, where silicon is "blind" due to its band gap.
Since humans are blind to these wavelengths as well, their relevance for cameras is questionable.

Comment Re:Ask IBM why they left . . . ? (Score 1) 111

I am pretty sure IBM did not leave for any reason directly related to the location. Semiconductor fabs can have a relatively short lifetime, depending on the technology, and the IBM fab had been in operation for decades, if I am not mistaken.

If you want a leading-edge fab, it is quite possible that some technology change (e.g. a wafer-size conversion) makes it uneconomical to upgrade an existing fab. In that case you need to build a new shell. Locations for new fabs are often significantly influenced by incentive payments from the local government. For example, the new GlobalFoundries fab in New York State got billions in incentive payments. IBM most likely decided to discontinue the site after moving the products to a more modern fab that was built somewhere they got more money...

Comment Re:More BS (Score 2) 261

Oh man, I always hated April Fools' Day on Slashdot, because all front-page articles would be "jokes". It became even worse when Slashdot started lagging behind all the other aggregators in speed. So, today they found a way to top even that with this stupid ROT13 shit.

Is today the day Slashdot jumped the shark? Probably not, because I see that this article only got around 160 comments. That used to be different five or ten years ago.

Slashdot is dead. The founders knew when to leave, but it is a pity the current owners let it rot as the zombie it is.

Comment A thing of the past (Score 1) 362

Hey, I know Slashdot is somewhat retro, but this submission really seems like a thing of the past. You know, when we were all still 14 in 1998, when Slashdot was still the most awesome website on the interwebs? In 1998 I could still guzzle down a couple of cans of soda a day without worrying about my weight. But it is not 1998 any more and I am not 14 any more. I bet the average visitor of Slashdot is in his thirties by now.

So, who in their right mind is interested in learning about drinking sugar water in the morning, even if it is infused with fruit juice? How is it news that some obesity-spreading conglomerate is launching another attack on the nation's health?

Come on Slashdot. I have not complained for years, but WTF!

Comment Re:Just another step closer... (Score 1) 205

You make good points. However, I think you're somewhat mischaracterizing the modern theories that include parallel universes.

> So long as we use the real physicists' definitions and not something out of Stargate SG-1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

(Emphasis added.) Your implication is that physicists have invented parallel universes, adding them to their theories. In actuality, parallel realities are predictions of certain modern theories. They are not axioms; they are results. Max Tegmark explains this nicely in a commentary (here or here). Briefly: if unitary quantum mechanics is right (and all available data suggests that it is), then the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist.

Now, you can frame a philosophical question about whether entities in a theory 'exist' or whether they are just abstractions. But it's worth noting that there are plenty of theoretical entities that we now accept as being real (atoms, quarks, spacetime, etc.). Moreover, there are many times in physics where, once we accept a theory as being right, we accept its predictions about things we can't directly observe. Two examples: to the extent that we accept general relativity as correct, we make predictions about the insides of black holes, even though we can't ever observe those regions. To the extent that we accept astrophysics and big-bang models, we make predictions about parts of the universe we can never observe (e.g. beyond the cosmic horizon).
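A one-line sketch of the point, in standard textbook notation (my own illustration, not taken from Tegmark's commentary): under purely unitary evolution, a measurement interaction entangles the system with the observer, and both terms survive.

```latex
(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle)\otimes|\text{ready}\rangle
\ \xrightarrow{\ U\ }\
\alpha\,|{\uparrow}\rangle\otimes|\text{saw}\,{\uparrow}\rangle
+ \beta\,|{\downarrow}\rangle\otimes|\text{saw}\,{\downarrow}\rangle
```

Nothing in the unitary dynamics deletes either term; a collapse axiom has to be added by hand to get rid of one. That is the sense in which the branches are a prediction rather than a postulate.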

> An untestable idea isn't part of science.

Indeed. But while we can't directly observe other branches of the wavefunction, we can, through experiments, theory, and modeling, indirectly learn much about them. We can have a lively philosophical debate about the extent to which we are justified in using the predictions of theories to call indirectly inferred things 'real' vs. 'abstract only'... but my point is that parallel realities are not alone here. Every measurement we make is an indirect inference based on limited data, extrapolated using a model we have some measure of confidence in.

> Occam's Razor ...

Occam's Razor is frequently invoked but is not always as useful as people make it out to be. If you have a theory X and a theory X+Y that both describe the data equally well, then X is better via Occam's Razor. But if you're comparing theories X+Y and X+Z, it's not clear which is "simpler". You're begging the question if you say "Clearly X+Y is simpler than X+Z! Just look at how crazy Z is!"

More specifically: unitary quantum mechanics is arguably simpler than quantum mechanics + collapse. The latter involves adding an ad-hoc, unmeasured, non-linear process that has never actually been observed. The former is simpler at least in description (it's just QM without the extra axiom), but as a consequence it predicts many parallel branches. (It's actually not an infinite number of branches: for a finite volume like our observable universe, the number of possible quantum states is large but finite.) Whether an ad-hoc axiom or a parallel-branch prediction is 'simpler' is debatable.

> Just about any other idea looks preferable to an idea that postulates an infinite number of unverifiable consequents.

Again, the parallel branches are not a postulate but a prediction. They are a prediction that bothers many people. Yet attempts to find inconsistencies in unitary quantum mechanics have so far failed. Attempts to observe the wavefunction-collapse process have also failed (there appears to be no limit to the size of the quantum superposition that can be generated). So the scientific conclusion is to accept the predictions of quantum mechanics (including parallel branches), unless we get some data that contradicts them. Or, at the very least, not to dismiss these predictions entirely unless you have empirical evidence against either them or unitary quantum mechanics itself.

Comment Re:Can't have it both ways (Score 1) 330

I disagree. Yes, there are tensions between openness/hackability/configurability/variability and stability/manageability/simplicity. However, the existence of certain tradeoffs doesn't mean that Apple couldn't make a more open product in some ways without hampering their much-vaunted quality.

One way to think about this question is to analyze whether a given open/closed decision is motivated by quality or by money. A great many of the design decisions being made are not in pursuit of a perfect product, but are part of a business strategy (lock-in, planned obsolescence, upselling of other products, DRM, etc.). I'm not just talking about Apple; this is true very generally. Examples:
- Having a single set of hardware to support does indeed make software less bloated and more reliable. That's fair. Preventing users from installing new hardware (at their own risk) would not be fair.
- Similarly, having a restricted set of software that will be officially supported is fine. Preventing any 'unauthorized' software from running on a device a user has purchased is not okay. The solution is to simply provide a checkbox that says "Allow 3rd party sources (I understand this comes with risks)" which is what Android does but iOS does not.
- Removing seldom-used and complex configuration options from a product is a good way to make it simpler and more user-friendly. But you can easily promote openness without making the product worse by leaving configuration options available but less obvious (e.g. accessed via commandline flags or a text config file).
- Building a product in a non-user-serviceable way (no screws, only adhesives, etc.) might be necessary if you're trying to make a product extremely thin and slick.
- Conversely, using non-standard screws, or using adhesives/etc. where screws would have been just as good, is merely a way to extract money from customers (forcing them to pay for servicing or buy new devices rather than fix old hardware).
- Using bizarre, non-standard, and obfuscated file formats or directory/data structures can in some cases be necessary in order to achieve a goal (e.g. performance). However, in most cases it's actually used to lock in the user (preventing the user from directly accessing data, preventing third-party tools from working). E.g. the way that iPods appear to store music files and metadata is extremely complex, at least last time I checked (all files are renamed, so you can't simply copy files to and from the device). The correct solution is to use open formats. In cases where you absolutely can't use an established standard, the right thing to do is to release all your internal docs so that others can easily build upon it or extend it.
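As a small sketch of the "available but less obvious" approach from the configuration point above (the tool and flag names here are hypothetical, just for illustration), Python's argparse can keep an advanced option fully functional while hiding it from the default help text:

```python
import argparse

# Hypothetical media tool: --quality is advertised in --help, while the
# power-user knob --io-scheduler still works but is not shown there.
parser = argparse.ArgumentParser(description="hypothetical media tool")
parser.add_argument("--quality", type=int, default=80,
                    help="output quality (0-100)")
parser.add_argument("--io-scheduler", default="auto",
                    help=argparse.SUPPRESS)   # present, just not advertised

# Power users who know the flag can still use it.
args = parser.parse_args(["--io-scheduler", "deadline"])
print(args.quality, args.io_scheduler)
```

The product stays simple for casual users (the default help only shows the common option), while nothing is taken away from those who want to dig deeper.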

To summarize: yes, there are cases where making a product more 'open' will decrease its quality in other ways. But there are also many examples where you can leave the option for openness/interoperability without affecting the as-sold quality of the product. (Worries about 'users breaking their devices and thus harming our image' do not persuade; the user owns the device, and ultimately we're talking about experienced users and third-party developers.) So, we should at least demand that companies make their products open in all those low-hanging-fruit cases. We can then argue in more detail about fringe cases where there really is an openness/quality tradeoff.
