Comment Re:Turing Tax (Score 1) 100

You are indeed correct - it all depends on the codec, the desired PSNR, and the bits per pixel available. For modern codecs, the motion search is the part that takes most of the computation, and doing it better is a super-linear-complexity operation - hence both your numbers and mine could be correct, just for different desired output qualities.
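
To make the super-linear point concrete, here is a minimal sketch in Python/NumPy of full-search block motion estimation (the block size, search range and function name are just my illustration, not taken from any real codec). The work per block grows with the square of the search range, so chasing better matches gets expensive fast.

```python
import numpy as np

def full_search(block, ref, top, left, search_range):
    """Exhaustive SAD search for one block against a reference frame.

    Cost per block is O(block_area * (2*search_range + 1)**2), so doubling
    the search range roughly quadruples the work - the super-linear part.
    """
    h, w = block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# Toy usage: a 16x16 block searched over +/-8 pixels finds itself in the reference.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(full_search(ref[24:40, 24:40].copy(), ref, 24, 24, 8))
```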

The ratio, though, is a good approximate rule of thumb. I wonder how it has changed as time has moved on? I suspect it has become bigger as software focus has moved away from pure efficiency towards higher-level designs, and CPUs have moved to more power-hungry superscalar architectures, but I would like some data to back up my hypothesis.

Comment Re:let me answer that with a question (Score 1) 100

With an exaflop computer, simulating the human brain is looking like it might be possible. If we can get a simulated brain working as well as a real brain, there's a good chance we can make it better too, because our simulated brain won't have the constraints that real brains have (ie. not limited by power/food/oxygen supply, not limited by relatively slow neurones, and not having to deal with cell repair and disease).

Basically, if current models of the brain are anywhere near correct, and current estimates of computation growth are close, it seems there is a real possibility of a fully simulated Skynet in 30-40 years.
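
For a rough sense of why "exaflop" and "brain" end up in the same sentence, here is a back-of-envelope calculation - all of the figures below are commonly quoted ballpark numbers, and the ops-per-synaptic-event cost is purely an assumption:

```python
# Back-of-envelope only: every figure here is a rough, commonly quoted estimate.
neurons = 8.6e10              # ~86 billion neurons
synapses_per_neuron = 1e4     # order-of-magnitude estimate
avg_firing_rate_hz = 10       # very rough average spike rate
ops_per_synaptic_event = 100  # assumed cost of modelling one synaptic event

flops = neurons * synapses_per_neuron * avg_firing_rate_hz * ops_per_synaptic_event
print(f"{flops:.1e} ops/s")   # ~8.6e17, i.e. within spitting distance of an exaflop
```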

Comment Turing Tax (Score 5, Interesting) 100

The amount of computation done per unit energy isn't really the issue. Instead, the problem is the amount of _USEFUL_ computation done per unit energy.

The majority of power in a modern system goes into moving data around and other tasks which are not the actual desired computation - examples include incrementing the program counter, figuring out instruction dependencies, and moving data between levels of cache. The energy spent on the actual computation is tiny in comparison.
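
As a rough illustration of that imbalance, here is a tiny table of per-operation energies - these are ballpark figures of the kind usually quoted for ~45 nm-class silicon, not measurements of any particular chip, so treat the ratios rather than the absolute numbers as the point:

```python
# Order-of-magnitude energy costs per operation (assumed ballpark figures).
energy_pj = {
    "32-bit integer add":     0.1,    # a fraction of a picojoule
    "32-bit float multiply":  4.0,    # a few picojoules
    "read a word from cache": 10.0,   # tens of pJ, varies a lot with cache level
    "read a word from DRAM":  1500.0, # on the order of a nanojoule or more
}
add = energy_pj["32-bit integer add"]
for op, pj in energy_pj.items():
    print(f"{op:>24}: {pj:8.1f} pJ  (~{pj / add:.0f}x an add)")
```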

Why do we do this then? Most of the power goes to what is informally called the "Turing Tax" - the extra things required to allow a given processor to be general purpose, ie. to compute anything. A single-purpose piece of hardware can only do one thing, but it is vastly more efficient, because all the machinery for figuring out which bits of data need to go where can be left out. Consider it like the difference between a road network that lets you go anywhere and a junction-free road running in a straight line between your house and your work. One is general purpose (you can go anywhere); the other is only good for one thing, but much quicker and more efficient.

To get nearer that goal, computers are gaining components that are less flexible. Less flexibility means less Turing Tax. For example, video encoder cores can do massive amounts of computation, yet they can only encode video - nothing else. For comparison, an HD video camera can record 1080p video in real time using only a couple of watts, while a PC (without a hardware encoder) would take 15 minutes or so to encode each minute of HD video, using far more power along the way.

The future of low-power computing is to find clever ways of building special-purpose hardware for the most computationally heavy work, so that the power-hungry general-purpose processors have less left to do.

Comment Re:What about pipelining and keep-alive? (Score 1) 275

Not quite. Pipelining requires responses to be delivered in the same order as the requests. This is fine if all the responses are available immediately (eg. static CSS and images), but for dynamic content such as PHP, a delay in generating the content will delay not only that request but also all following requests.
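
A tiny simulation of that head-of-line-blocking effect (the response times are entirely made up, just to show the in-order constraint): with pipelining, each response can't go out before every earlier one has, so one slow dynamic response holds up everything queued behind it.

```python
from itertools import accumulate

# Hypothetical times (ms) at which each pipelined response becomes ready.
ready_ms = [5, 8, 300, 10, 12]   # the third request is slow dynamic content

# Pipelined HTTP: responses go out in request order, so response i cannot
# be sent before every earlier response is ready.
in_order_done = list(accumulate(ready_ms, max))

# Multiplexed (SPDY-style): each response can be sent as soon as it is ready.
print("pipelined completion:  ", in_order_done)  # [5, 8, 300, 300, 300]
print("multiplexed completion:", ready_ms)
```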

One main advantage of SPDY is HTTP header compression, which should reduce upstream bandwidth for web browsing to about a quarter of what it currently is. While bandwidth isn't that important any more, using fewer packets means less chance of a lost packet, and lost packets are a major slowdown for page loads - think of those web pages that seem to take ages to load, but then load instantly when you hit refresh. That was probably a lost packet very early in an HTTP stream.
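
A quick way to get a feel for the header-compression claim is to push a typical request header through zlib. SPDY's header compression is zlib-based, but it also uses a shared dictionary and keeps the compression context alive across requests on a connection, so repeated headers like User-Agent and Cookie cost almost nothing the second time - this crude one-shot sketch (with a made-up header block) therefore understates the real gain:

```python
import zlib

# A made-up but typical-looking request header block.
headers = (
    "GET /images/logo.png HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.1 "
    "(KHTML, like Gecko) Chrome/14.0.835.202 Safari/535.1\r\n"
    "Accept: image/png,image/*;q=0.8,*/*;q=0.5\r\n"
    "Accept-Language: en-gb,en;q=0.5\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Referer: http://www.example.com/index.html\r\n"
    "Cookie: session=0123456789abcdef0123456789abcdef\r\n\r\n"
).encode()

compressed = zlib.compress(headers, 9)
print(len(headers), "->", len(compressed), "bytes",
      f"({len(compressed) / len(headers):.0%} of original)")
```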

In the future, SPDY "push" will allow a server to send resources the client is expected to need but hasn't yet asked for. This could, theoretically, allow a web page to load in a fraction of a second, because the client doesn't have to parse the HTML document and run JavaScript just to find out which resources to load next.

Also, pipelining support on servers is so unreliable that browser manufacturers don't dare do some of the things allowed by the spec because it would break too many servers - hence a new spec is preferable to encouraging use of an old one.

Comment Not a great example of a data dump (Score 4, Informative) 643

It seems, looking at the raw data, that while "40G's" is quoted in the summary and words like "totalled" are used, the data recorded by the box only shows a 15 MPH crash.

There is other dubious data - for example, the box's sensors indicate that it accelerated by 22 MPH while the data was being retrieved, ie. while sitting on some investigator's desk - which seems unlikely!

The crash acceleration data itself contains some very high-amplitude, high-frequency oscillations at around 200 Hz. These are much bigger than the crash itself. They could be vibrations going through the car after something goes "twang", or could even be the stereo bass turned up loud. These vibrations are where the "40g" comes from - the actual crash is more like 1 or 2 g.
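
One way to separate the two components would be to low-pass filter the trace well below 200 Hz and read the crash pulse off the filtered signal. Here is a minimal sketch with SciPy on synthetic data - the sample rate, cutoff and waveforms are all my own assumptions, with no connection to the actual box's format:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                       # assumed sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)

# Synthetic trace: a ~1.5 g crash pulse plus a large 200 Hz vibration.
crash_pulse = 1.5 * np.exp(-((t - 0.1) / 0.03) ** 2)
vibration = 40.0 * np.sin(2 * np.pi * 200 * t) * (t > 0.1)
accel_g = crash_pulse + vibration

# 4th-order Butterworth low-pass at 50 Hz, applied zero-phase.
b, a = butter(4, 50 / (fs / 2), btype="low")
smoothed = filtfilt(b, a, accel_g)

print("raw peak:     ", round(accel_g.max(), 1), "g")   # dominated by the vibration
print("filtered peak:", round(smoothed.max(), 1), "g")  # close to the crash pulse
```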

Note, however, that there may be more information that wasn't recorded.

Comment Re:Sound waves don't carry enough power (Score 1) 290

You are right. They will have a standard power supply connected in series with a carbon microphone on one phone and a speaker on the other phone. Neither phone needs its own power, because the centralised switching station provides it. That station probably has a small battery backup so the phones still work when the rest of the power on the ship has failed.

Comment The idea of removing impurities is cool... (Score 3, Informative) 93

The idea of removing impurities using light is cool if it increases the efficiency of the completed panel.

The premise of saving energy in the manufacture of the panels isn't really relevant. Producing silicon currently uses lots of energy, but it needn't, really. The process really only involves heating and cooling relatively small volumes of silicon, and if you were to design a machine to do it continuously, you could do it with very little energy. The raw materials are cold, the output is cold, and the processing in the middle is hot - use the heat from the finished product cooling down to warm new raw materials in a continuous process, as is already done in a water heat exchanger.
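
As a rough illustration of how much the counterflow idea could save, here is a back-of-envelope sketch using textbook constants for silicon and an assumed heat-exchanger effectiveness - the real process has far more steps and losses than this, so it is only the principle being shown:

```python
# Rough energy balance for melting 1 kg of silicon starting from room temperature.
cp_si = 0.8e3        # J/(kg*K), approximate average specific heat of solid Si
latent = 1.79e6      # J/kg, latent heat of fusion of Si
dT = 1414 - 25       # K, room temperature up to the melting point

heat_in = cp_si * dT + latent        # energy to heat and melt 1 kg from cold
effectiveness = 0.8                  # assumed fraction recovered from the cooling product

net = heat_in * (1 - effectiveness)  # external energy needed with recovery
print(f"no recovery:   {heat_in / 3.6e6:.2f} kWh/kg")
print(f"80% recovered: {net / 3.6e6:.2f} kWh/kg")
```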

The reason this currently isn't done is because energy is a tiny cost in the production of silicon, and other things are far more important than recapturing a tiny amount of energy while the silicon cools down.

Comment I can't see this happening... (Score 1) 439

As long as there is a small hacker subculture, and as long as they keep innovating and adding features people want, the public (or at least some of them) will flock to the more open devices.

It isn't exactly something we can write laws about, because enforcement is hard, and it isn't something that is going to become law in every single country...

Comment It seems debugging spacecraft is too hard... (Score 5, Interesting) 117

Plenty of good spacecraft suffer software malfunctions and fail as a result, and most failures end up with the craft not returning any data about what went wrong. Future craft end up being sent with exactly the same problems, because we never find out what they were.

There already exist plans for tiny satellites which can transmit a radio signal back to Earth - eg. the Kicksat: http://www.kickstarter.com/projects/251588730/kicksat-your-personal-spacecraft-in-space

Why not glue lots of these Kicksats, self-powered, to the outside of any spacecraft, and maybe connect a few to internal data systems to collect more data? Now if the spacecraft blows up, and even a few survive the explosion, their radio signals can be tracked precisely by a reverse-GPS scheme (where many ground stations triangulate the exact position), allowing a realtime 3D model of the parts of the spacecraft that carry Kicksats to be produced. And since some have connections to the internal monitoring systems, even if only a few survive they can transmit data back to the ground very slowly over the next few days (very slowly, since they have very limited transmission power).
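
The "reverse-GPS" idea is essentially multilateration from time differences of arrival. Here is a minimal sketch with SciPy least-squares - the station positions, emitter position and clean, noise-free timings are all made up for illustration, and real tracking would have to deal with clock error and noise:

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical ground-station positions (metres, local Cartesian frame).
stations = np.array([
    [0.0,    0.0,   0.0],
    [120e3,  10e3,  0.2e3],
    [30e3,   150e3, 0.5e3],
    [-80e3,  90e3,  0.1e3],
    [60e3,  -70e3,  0.3e3],
])

true_pos = np.array([45e3, 60e3, 80e3])   # made-up emitter position
arrival_t = np.linalg.norm(stations - true_pos, axis=1) / C

def residuals(p):
    # Time-difference-of-arrival residuals, measured relative to station 0.
    d = np.linalg.norm(stations - p, axis=1)
    return (d - d[0]) / C - (arrival_t - arrival_t[0])

estimate = least_squares(residuals, x0=np.array([0.0, 0.0, 50e3])).x
print("estimate:", np.round(estimate / 1e3, 2), "km")
print("true:    ", true_pos / 1e3, "km")
```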

Comment Why the recall...? (Score 1) 180

It appears the problem only occurs with some chips, after some years, and where the issue occurs, it will only affect some of the SATA ports.

To me, it sounds like the best course of action from Intel's point of view would be to replace any failed chip when the user complains. The majority of users will never come across the issue, since most users won't have 3 or more SATA devices, and of those users that do, many will probably never hit the problem - or if they do, it'll be after the warranty period.

If I were Intel, I definitely wouldn't be recalling any chips that were already soldered onto a board without a direct user complaint. It might be fair enough to recall chips that aren't yet on a PCB, since then the cost is much lower, although even then I would just recall them and then sell them again as slightly cheaper 2-port versions.

The refund procedure could be handled entirely by Intel, where all returned boards get dumped directly into "recycling" and the user is sent a check for $100 (or more if they include a receipt for the board).

Since this ISN'T what Intel is doing, it makes me suspect there is something else more serious wrong with them...

Comment CPU power (Score 1) 378

CPU power and ease of design are the main things. JPEG is specifically designed to allow encoding and decoding with very little memory bandwidth, at the expense of compression ratio.

The reason is that JPEG can be encoded in 16x16 pixel blocks. No block depends on any other block, which allows encoders to only worry about a tiny part of the image at once. Each part of the uncompressed image is read from RAM exactly once, and the temporary intermediate "state" data is a fixed size that isn't proportional to the image size, which makes hardware design easy. Also, since the blocks are independent, hardware encoders can be built which process many parts of an image simultaneously - that's how a cheap camera can compress a multi-megapixel image to JPEG in a fraction of a second.
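
A minimal sketch of that block-independent structure in NumPy/SciPy (illustrative only - real JPEG adds chroma subsampling, quantisation tables, zig-zag ordering and entropy coding; the 16x16 figure above corresponds to a subsampled macroblock, while the transform itself works on 8x8 tiles). The point is that the live state is one tile regardless of image size, and tiles could be handed to parallel units:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(tile):
    # 2-D type-II DCT, as applied per 8x8 block in JPEG.
    return dct(dct(tile, axis=0, norm="ortho"), axis=1, norm="ortho")

def encode_blocks(image, block=8):
    """Transform an image one block at a time.

    Only one block of working state is live at any moment, independent of
    image size, and every pixel is read exactly once - which is what makes
    cheap streaming/parallel hardware encoders possible.
    """
    h, w = image.shape
    coeffs = np.empty_like(image, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block].astype(float) - 128.0
            coeffs[y:y + block, x:x + block] = dct2(tile)
    return coeffs

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(encode_blocks(img).shape)
```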

The disadvantage of this is that if all the 16x16 blocks in an area are very similar (for example a repeating pattern), the encoder can't exploit that, so there is minimal compression advantage. (I say minimal because another stage of JPEG compression, the coefficient Huffman tree, might possibly help in some rare cases.)

WebP, on the other hand, I expect chucks this idea out of the window in favour of compression ratio, but in doing so it also chucks out the possibility of fast, cheap, simple hardware encoders and decoders.

Note that single-core software encoders probably aren't affected as much by this, and today most image decoding is single-threaded and software-based, so the impact may not be as large as I made out.

Comment Re:So... is this different from Linux KVM w/ KMS? (Score 1) 129

Using trampolines for every cross-library call seems very inefficient...

The Windows method seems better for the more common case: it does the costly rewriting at library load time, and then avoids an extra jump on every library function call.

What's the performance impact of this? I bet it's at least a couple of percent, which is significant if it's across the entire system.
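
You can't measure the real trampoline cost from a script, but as a loose analogy for "one extra indirection on every call", here is a toy micro-benchmark - it is illustrative only and says nothing about actual PLT or trampoline overhead on any real system:

```python
import timeit

def work(x):
    return x + 1

# Direct call vs. a call routed through one extra level of indirection,
# loosely analogous to a direct jump vs. a jump through a trampoline slot.
table = {"work": work}

direct = timeit.timeit("work(1)", globals={"work": work}, number=2_000_000)
indirect = timeit.timeit("table['work'](1)", globals={"table": table}, number=2_000_000)

print(f"direct:   {direct:.3f} s")
print(f"indirect: {indirect:.3f} s ({indirect / direct:.2f}x)")
```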
