
Comment Re:Not so fast...YET (Score 1) 135

It is approximately valid. He put a bandwidth simulator between himself and his proxy.

His comment about the average site requiring ~30 different SPDY connections seems excessive, though, and I suspect that's why he's seeing such bad results. Maybe he is assuming no benefit from the removal of domain sharding, which providers would likely do if they rolled out SPDY.

Comment Re:How small is small? (Score 1) 137

Neither of those sound like unsolvable problems.

People don't like being bounced around either, so we use suspension and soft seats for them.

The combustion chamber in a gas engine gets to around 1000 degrees C as well, and that isn't a problem. We can get rid of the heat if necessary, or if it needs to stay hot we can stick it in an insulating box. Either way, it sounds doable.

2 kW isn't enough for a car, though, and if you were to scale it up to the 100 kW needed to meet a car's peak demand, it might prove too big and heavy.

Comment Re:No (Score 1) 502

(I suspect the error in this research is the method used to keep the LED at 135°C. If the leads are connected to a circuit which is cooler, they might cause one side of the semiconductor in the LED to be slightly cooler than the rest. If that were the case, it would form a thermoelectric (Peltier/Seebeck) junction, which could drive the LED even without any external electrical energy.)

Comment Re:No (Score 1) 502

The laws of thermodynamics say you can't simply extract energy from heat; you have to extract it from a temperature difference. This research seemingly goes against that, so until I see further evidence I suspect there has been a measurement error somewhere.
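As a rough illustration of why the temperature difference is what matters, here is a minimal sketch using the Carnot limit (my own illustration, not a model of the actual experiment):

    def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
        """Maximum fraction of heat any engine can convert into work."""
        return 1.0 - t_cold_k / t_hot_k

    # LED held at 135 C (408 K) with surroundings at the same temperature:
    print(carnot_efficiency(408.0, 408.0))   # 0.0 -> no work extractable at all
    # Same LED with surroundings at room temperature (293 K):
    print(carnot_efficiency(408.0, 293.0))   # ~0.28 -> some work is possible

With no temperature difference the extractable work is exactly zero, which is why a measurement showing otherwise needs a very careful look.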

Comment Re:They still need a C&C (Score 2) 137

By the way, I think you were mixing up encryption with authentication. You are right that the control messages can't usefully be encrypted: they must be decryptable by every node in the network, so security researchers have access to whatever key they are encrypted with and can decrypt them too.

They can however be signed (authenticated) to prevent anyone but the real botnet owner from sending them.

(Note: all of this assumes asymmetric cryptography, e.g. RSA, where one key is used for encryption and another for decryption, or equally one key for signing and another for validating.)
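A minimal sketch of that asymmetry using the Python cryptography package (the library choice, key size and message are my own illustration, not anything from the botnet being discussed):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The operator generates this once and keeps the private half secret.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()   # this half can be handed out freely

    message = b"example control message"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Signing requires the private key...
    signature = private_key.sign(message, pss, hashes.SHA256())

    # ...but anyone holding only the public key can verify the signature.
    try:
        public_key.verify(signature, message, pss, hashes.SHA256())
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")

Someone who extracts the public key from a node can run the verify step all day long, but can never produce a signature that passes it.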

Comment Re:They still need a C&C (Score 4, Insightful) 137

I'm not sure about your comments re: keys.

It seems relatively easy to design a botnet to be peer-to-peer and yet not able to be taken over by a rogue node. Consider a P2P overlay network where each node plays "Chinese whispers" and forwards any packet to all neighbours (with some TTL limit).

The botnet owner creates a public/private keypair and uses his private key to sign control messages. Each host checks whether each incoming packet is signed by the botnet owner, which requires only the owner's public key, built into the code. If someone reverse engineers a node, all they have is the public key, so they can't sign messages (since signing requires the private key).

An attacker could still DoS this network with unsigned control messages, but that can easily be thwarted (see the sketch below) by:
a) never forwarding any unsigned message
b) forwarding a signed message only if its version number is higher than that of the last forwarded message.

To hide himself while operating the network, the botnet owner can use Tor or some other anonymising service to connect to any random node in the network (rather like the uTorrent DHT does) and send a signed control message with a version number higher than any the network has seen before.
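Here is a rough sketch of the node-side filtering rules (a) and (b). The class and names are mine; verify_fn stands in for the public-key signature check shown in the other comment's sketch:

    from typing import Callable, List

    class BotNode:
        """Toy model of rules (a) and (b): drop forged or stale control messages."""

        def __init__(self, verify_fn: Callable[[bytes, bytes], bool],
                     neighbours: List["BotNode"]) -> None:
            self.verify_fn = verify_fn      # e.g. wraps public_key.verify()
            self.neighbours = neighbours
            self.last_version = -1          # highest version number forwarded so far

        def handle(self, message: bytes, signature: bytes,
                   version: int, ttl: int) -> None:
            if ttl <= 0:
                return                      # TTL limit stops the flood eventually
            if not self.verify_fn(message, signature):
                return                      # rule (a): never forward an unsigned/forged message
            if version <= self.last_version:
                return                      # rule (b): ignore replays and stale commands
            self.last_version = version
            # ...act on the command locally here...
            for neighbour in self.neighbours:
                neighbour.handle(message, signature, version, ttl - 1)  # gossip onwards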

Comment Re:Turing Tax (Score 1) 100

You are indeed correct - it all depends on the codec, the desired PSNR and the bits/pixel available. For modern codecs, the motion search is the part that takes most of the computation, and doing it better is a super-linear-complexity operation - hence both your numbers and mine could be correct, just for different desired output qualities.

The ratio is a good approximate rule of thumb, though. I wonder how it has changed as time has moved on? I suspect it has become bigger as software focus has moved away from pure efficiency towards higher-level designs and CPUs have moved to more power-hungry superscalar architectures, but I would like some data to back up my hypothesis.
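To make the super-linear point concrete, here is a bare-bones full-search block-matching sketch (my own illustration, not any particular codec's search). The work per block grows with the square of the search range, so widening the search for better quality costs far more than linearly:

    import numpy as np

    def full_search(block: np.ndarray, ref: np.ndarray,
                    x: int, y: int, search_range: int):
        """Exhaustive motion search using sum of absolute differences (SAD).

        Cost per block is O(search_range**2 * block_pixels): doubling the
        search range roughly quadruples the work.
        """
        h, w = block.shape
        best_sad, best_mv = float("inf"), (0, 0)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + h > ref.shape[0] or xx + w > ref.shape[1]:
                    continue  # candidate block falls outside the reference frame
                sad = np.abs(ref[yy:yy + h, xx:xx + w].astype(int)
                             - block.astype(int)).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
        return best_mv, best_sad

Dedicated encoder hardware does exactly this kind of regular, repetitive arithmetic without spending energy on instruction fetch and scheduling, which is where the software/hardware ratio comes from.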

Comment Re:let me answer that with a question (Score 1) 100

With an exaflop computer, simulating the human brain is looking like it might be possible. If we can get a simulated brain working as well as a real brain, there's a good chance we can make it better too, because our simulated brain won't have the constraints that real brains have (i.e. it isn't limited by power/food/oxygen supply, isn't limited by relatively slow neurones, and doesn't have to deal with cell repair and disease).

Basically, if current models of the brain are anywhere near correct, and current estimates of computation growth are close, there is a real possibility of a fully simulated Skynet in 30-40 years.
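For a rough sense of why exaflops is the ballpark, here is the usual back-of-the-envelope estimate. The neuron, synapse and firing-rate figures are commonly quoted approximations, not results from any specific brain model:

    neurons = 8.6e10              # ~86 billion neurons in a human brain (commonly cited)
    synapses_per_neuron = 7e3     # order-of-magnitude estimate
    avg_firing_rate_hz = 10       # averaged across the brain; peak rates are ~100 Hz
    ops_per_synaptic_event = 10   # flops to model one synaptic update (model-dependent)

    ops_per_second = (neurons * synapses_per_neuron
                      * avg_firing_rate_hz * ops_per_synaptic_event)
    print(f"{ops_per_second:.1e} ops/s")   # ~6e16 ops/s with these crude numbers

An exaflop machine (1e18 flops) would give roughly a 10-100x margin over this crude estimate, which is why it is the scale at which whole-brain simulation starts to look plausible.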

Comment Turing Tax (Score 5, Interesting) 100

The amount of computation done per unit energy isn't really the issue. Instead, the problem is the amount of _USEFUL_ computation done per unit energy.

The majority of power in a modern system goes into moving data around and other tasks which are not the actual desired computation. Examples are incrementing the program counter, figuring out instruction dependencies, and moving data between levels of cache. The actual computation on the data is tiny in comparison.

Why do we do this, then? Most of the power goes to what is informally called the "Turing Tax" - the extra machinery required to make a processor general purpose, i.e. able to compute anything. A single-purpose piece of hardware can only do one thing, but it is vastly more efficient, because all the power spent figuring out which bits of data need to go where can be left out. Consider it like the difference between a road network that lets you go anywhere and a junction-free road running in a straight line between your house and your work. One is general purpose (you can go anywhere); the other is only good for one thing, but much quicker and more efficient.

To get nearer our goal, computers are getting components that are less flexible. Less flexibility means less Turing Tax. For example, video encoder cores can do massive amounts of computation, yet they can only encode video - nothing else. For comparison, an HD video camera can record 1080p video in real time using only a couple of watts, while a PC (without a hardware encoder) might take 15 minutes or so to encode each minute of HD video, using far more power along the way.
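Putting rough numbers on that comparison (the 100 W CPU figure is my own illustrative guess; the 2 W and 15 minutes come from the paragraph above):

    # Dedicated encoder: ~2 W while encoding one minute of 1080p in real time
    camera_energy_j = 2 * 60                 # 120 J per minute of video

    # General-purpose CPU: assume ~100 W package power for ~15 min of encoding
    pc_energy_j = 100 * 15 * 60              # 90,000 J for the same minute of video

    print(pc_energy_j / camera_energy_j)     # ~750x more energy for the same output

Even if the exact wattages are off by a factor of two, the gap is still orders of magnitude.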

The future of low-power computing is to find clever ways of building special-purpose hardware for the most computationally heavy work, so that the power-hungry general-purpose processors have less left to do.

Comment Re:What about pipelining and keep-alive? (Score 1) 275

Not quite. Pipelining requires responses to be delivered in the same order as the requests. This is fine if all the responses are available immediately (e.g. static CSS and images), but for dynamic content such as PHP, a delay generating the content will delay not only that response but also all the following ones.

One main advantage of SPDY is HTTP header compression, which should reduce upstream bandwidth for web browsing to about a quarter of what it currently is. While bandwidth isn't that important any more, using fewer packets means less chance of losing one, and lost packets are a major slowdown for page loads. Imagine those web pages that seem to take ages to load but then load instantly when you hit refresh - that was probably a lost packet very early in an HTTP stream.
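A quick way to see where that "about a quarter" figure comes from (this toy run just uses plain zlib on a made-up request; SPDY additionally seeds the compressor with a dictionary of common header names, so real savings on repeated requests are even better):

    import zlib

    def request(path: str) -> bytes:
        return (
            f"GET {path} HTTP/1.1\r\n"
            "Host: www.example.com\r\n"
            "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.19 Chrome/18.0\r\n"
            "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
            "Accept-Encoding: gzip,deflate\r\n"
            "Accept-Language: en-GB,en;q=0.8\r\n"
            "Cookie: session=0123456789abcdef0123456789abcdef\r\n\r\n"
        ).encode()

    comp = zlib.compressobj(9)   # one compression context shared by all requests, as SPDY does
    raw = compressed = 0
    for path in ["/index.html", "/style.css", "/logo.png", "/app.js", "/favicon.ico"]:
        data = request(path)
        out = comp.compress(data) + comp.flush(zlib.Z_SYNC_FLUSH)  # flush so bytes go on the wire now
        raw += len(data)
        compressed += len(out)

    # Each request after the first costs only a handful of bytes, so the overall
    # ratio drops well below a half and keeps falling as more requests share the context.
    print(raw, compressed, compressed / raw)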

In the future, SPDY "push" will allow a server to send resources the client is expected to need but hasn't yet asked for. This could, theoretically, allow a web page to load in a fraction of a second, because the browser wouldn't have to parse the HTML document and run JavaScript just to find out which resources to load next.

Also, pipelining support on servers is so unreliable that browser manufacturers don't dare do some of the things allowed by the spec because it would break too many servers - hence a new spec is preferable to encouraging use of an old one.

Comment Not a great example of a data dump (Score 4, Informative) 643

It seems, looking at the raw data, that while "40G's" is quoted by the summary and words like "totalled" are used, the data recorded by the box only shows a 15 MPH crash.

There is other dubious data - for example, the box sensors indicate that it picked up a 22 MPH velocity change while the data was being retrieved, i.e. while sitting on some investigator's desk - which seems unlikely!

The crash acceleration data itself contains some very high-amplitude, high-frequency oscillations at around 200 Hz, and these are much bigger than the crash itself. They could be vibrations going through the car after something goes "twang", or could even be the stereo bass turned up loud. These vibrations are where the "40 g" comes from - the actual crash is more like 1 or 2 g.
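To see how a vibration can dominate the reading, here is a small peak-acceleration calculation for a sinusoid (the 0.25 mm displacement amplitude is an assumed, illustrative value, not something taken from the recording):

    import math

    freq_hz = 200.0          # oscillation frequency seen in the recording
    amplitude_m = 0.00025    # assumed 0.25 mm displacement amplitude (illustrative only)

    omega = 2 * math.pi * freq_hz
    peak_accel = omega ** 2 * amplitude_m    # peak acceleration of a sinusoid: a = w^2 * x
    print(peak_accel / 9.81)                 # ~40 g from a sub-millimetre 200 Hz vibration

A barely visible 200 Hz shake is enough to register tens of g on the accelerometer even though the net velocity change, and hence the actual crash severity, is small.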

Note however there may be more information that wasn't recorded.

Comment Re:Sound waves don't carry enough power (Score 1) 290

You are right. They will have a standard power supply connected in series with a carbon microphone on one phone and a speaker on the other phone. Neither phone needs its own power source, because the centralised switching station provides the loop current. The station probably has a small battery backup so it still works when the rest of the ship's power has failed.
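As a rough sketch of that series loop (the 48 V battery and the resistances are typical telephone-exchange ballpark values I'm assuming, not anything specific to this ship's system):

    # Series loop: exchange battery -> line -> carbon microphone -> earpiece -> back
    battery_v = 48.0        # typical exchange battery voltage (assumed)
    line_ohms = 600.0       # copper loop resistance (assumed)
    mic_ohms = 150.0        # carbon microphone (assumed)
    earpiece_ohms = 150.0   # receiver coil (assumed)

    loop_current_a = battery_v / (line_ohms + mic_ohms + earpiece_ohms)
    print(f"{loop_current_a * 1000:.0f} mA")   # ~53 mA, supplied entirely by the exchange

The microphone modulates that current and the earpiece turns the modulation back into sound, so the handsets themselves stay completely passive.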
