
Comment Re:What happens if (Score 1) 281

It doesn't work that way. To snatch someone's coins you have to break their private key and produce a transaction signed by it.

You could refuse to include some transactions, or you could spend your own coins twice on two different forks of the block chain. I don't think there are any other ways to game the system.

Comment Re:A model based on social covenants (Score 1) 170

To break up the key, you could just use Reed-Solomon error correction: N bits of key plus M extra bits of error correction, broken into numbered pieces. Any combination of pieces that provides N bits can be used for recovery. If you assemble more bits, you can even correct some amount of bit rot.
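The splitting step can be sketched with the closely related Shamir-style polynomial scheme (Reed-Solomon erasure coding evaluates and interpolates polynomials the same way). The prime, share counts, and function names here are illustrative, not from the original post:

```python
import random

# Illustrative field: a Mersenne prime larger than any secret we split.
PRIME = 2**127 - 1

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n numbered shares; any k of them recover it."""
    # Degree-(k-1) polynomial with the secret as the constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x = 0 to get the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any k shares reconstruct the secret exactly; fewer than k reveal nothing about it.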

Comment Re:This is news? The stock market is a house of ca (Score 1) 382

There's a strong correlation between share price changes and margin loan acceleration. Most of the "growth" of the share market comes from people borrowing money to speculate, not because those companies are producing more value themselves. Of course, what goes up must come down.

Comment Re:Thermodynamically Impossible (Score 1) 311

Once your passive black road has been covered by white snow, you've lost. The sun's energy will bounce off the snow, and you have to wait until the ambient temperature rises enough to melt it. If you can use some stored energy to melt the snow, you can start absorbing more energy on your black surface to recharge. You may eventually lose the war and run out of stored energy, but you might be able to delay the inevitable.

Comment Re:what the FEC... (Score 1) 129

I'm thinking of Network Coding Meets TCP, though that paper doesn't give a great background. I've experimented with my own implementation, but had to shelve it due to lack of time. I'll try to quickly summarise the core idea:

You have packets [p1, .. pn] in your stream's transmission window.

Randomly pick coefficients [x1, ..., xn] and calculate a new packet = x1*p1 + x2*p2 + ... + xn*pn (in a Galois field, so '+' is an XOR and '*' is a pre-computed lookup table). Send the coefficients in the packet header.

The receiver collects the packets and attempts to combine pairs of them to reduce the complexity of the coefficients, essentially solving simultaneous equations. That sounds complicated, but the algorithm isn't too hard:

- Keep the current set of packets, sorted by their coefficients. When you receive an incoming packet, you attempt to subtract each of your existing packets which have a smaller coefficient set (again, galois field math). If you're left with nothing, this packet didn't give you any new information, so throw it away.

- Attempt to subtract this new packet from each of the existing packets in your set. Insert your new packet into the list.

When you have a packet whose most significant coefficient is 1, you know you will eventually be able to decode that packet, and you know you can eliminate that coefficient from any other incoming packet. So you can send an acknowledgement to the sender and they can advance their transmit window. Once you have eliminated all the other coefficients, you can deliver the packet to the application. Keep each packet in memory until the sender has advanced their window and stopped sending it.

Each incoming packet may eliminate a coefficient from the packets in your list while introducing a new one. If you don't send extra redundant packets, you may never be able to decode anything: consider the worst case where every packet is the XOR of two neighbouring packets in the stream; you can't decode anything until you receive a single packet by itself. Sending more redundant packets reduces the latency to decode the stream, at the cost of more useless packets received.
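The steps above can be sketched over GF(2), where every coefficient is a single bit, '+' is XOR on whole packets, and '*' degenerates to a mask — a simplification of the table-driven Galois arithmetic described in the post. All names, sizes, and the window layout are illustrative:

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(window: list[bytes]) -> tuple[list[int], bytes]:
    """One coded packet: a random nonzero GF(2) coefficient vector
    plus the XOR of the selected packets from the transmit window."""
    coeffs = [random.randint(0, 1) for _ in window]
    if not any(coeffs):
        coeffs[0] = 1
    payload = bytes(len(window[0]))
    for c, pkt in zip(coeffs, window):
        if c:
            payload = xor(payload, pkt)
    return coeffs, payload

class Decoder:
    """Incremental Gaussian elimination: one stored row per pivot."""
    def __init__(self, n: int):
        self.n = n
        self.rows = {}  # pivot position -> (coefficient vector, payload)

    def receive(self, coeffs: list[int], payload: bytes) -> bool:
        coeffs = list(coeffs)
        # Subtract existing rows until we find a fresh pivot or reduce to zero.
        while any(coeffs):
            pivot = coeffs.index(1)
            if pivot not in self.rows:
                self.rows[pivot] = (coeffs, payload)
                return True  # innovative packet
            rc, rp = self.rows[pivot]
            coeffs = [a ^ b for a, b in zip(coeffs, rc)]
            payload = xor(payload, rp)
        return False  # no new information; throw it away

    def solve(self):
        """Back-substitute once all n pivots are present; returns the packets."""
        if len(self.rows) < self.n:
            return None
        out = [None] * self.n
        for pivot in sorted(self.rows, reverse=True):
            coeffs, payload = self.rows[pivot]
            for j in range(pivot + 1, self.n):
                if coeffs[j]:
                    payload = xor(payload, out[j])
            out[pivot] = payload
        return out
```

A receiver just feeds every arriving coded packet into `receive` and delivers the window once `solve` succeeds; packets that reduce to zero are the "useless" redundant ones.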

Comment Re:Packet loss models? (Score 1) 129

(I haven't read this paper yet, but I've read other Network Coding data and experimented with the idea myself)

With TCP, when a packet is lost you have to retransmit it. You could simply duplicate all packets, like using RAID1 for HDD redundancy. But this obviously wastes bandwidth.

Network coding combines packets in a more intelligent way, more like RAID5 / RAID6 with continuous adaptation based on network conditions. Any packet that arrives may allow you to deduce information that you haven't seen before. Basically each packet is the result of an equation like f(p1, p2, p3) = a*p1 + b*p2 + c*p3. When each packet arrives, you attempt to solve the set of simultaneous equations you have received. When you have reduced each expression to just a single packet, you send it up the protocol stack.
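As a worked example of the RAID5-style equation above, a single XOR parity packet lets the receiver solve for any one lost packet (the packet contents here are made up):

```python
# Three data packets plus one redundant parity packet, RAID5-style:
# parity = p1 + p2 + p3 in GF(2), i.e. a bytewise XOR.
p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"
parity = bytes(a ^ b ^ c for a, b, c in zip(p1, p2, p3))

# Suppose p2 is lost in transit; solve the one remaining equation for it.
recovered = bytes(a ^ c ^ p for a, c, p in zip(p1, p3, parity))
assert recovered == p2
```

Network coding generalises this: coefficients are chosen randomly per packet, so any sufficiently large subset of arriving packets spans the original data.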

You still need a TCP-like acknowledgement scheme so that you can: rate-limit the flow of packets based on measured congestion, tweak the percentage of redundant packets being sent based on measured packet loss, and advance the stream to include new data.

If you get the network coding parameters wrong, the connection still might stall, or you might be sending too much redundant data. If everything is going well, a link with 10% packet loss just means that your stream is transferred 10% slower.

Comment Re:what the FEC... (Score 1) 129

What if you could transmit data without link layer flow control bogging down throughput with retransmission requests?

TFS makes it look like network coding can magically send data without any form of ACK & Retransmit. Network coding still requires feedback from a flow control protocol. You need to tweak the percentage of extra packets to send based on the number of useful / useless packets arriving, and you still need to control the overall rate of transmitted packets to avoid congestion. The goal is to make sure that every packet that arrives is useful for decoding the stream, regardless of which packets are lost. So yes, it's a kind of automatically adapting error correction protocol for a stream service.

Comment Re:From the article... (Score 2) 339

Modern compilers are amazing tools for optimising down to efficient machine code. But every step of the optimisation pipeline has been carefully designed; there's no strong AI there, just a lot of heuristics.

In comparison, designing hardware still seems like a very manual process. IMHO there's plenty of room for automation improvements. But then, there are fewer people looking at the problem.

I could totally see a future where software is "compiled" into a mixture of CPU like, GPU like and FPGA like instructions without manual intervention. Or, if you really need the extra performance, output a design for an ASIC chip.

I'd predict that every step of the compilation pipeline would be as obvious and understandable as current compiler tools, with engineers working on optimisation passes at every layer, tweaking the process to optimise for power consumption and/or gate count. It's just a question of finding the right motivation to do it.

Comment Re:Moving goal posts (Score 1) 220

DNSSEC should give you confidence that the person who currently "owns" the domain name is the same person who "owns" the server you're talking to. That should be enough for most casual connections. But it also puts all of your security in one basket. Take over the domain entry and you control everything.

So the next obvious step is to get multiple independent authorities to verify your identity, sign your key, and provide that information via DNS and/or at connection establishment time. Then we should raise the bar for displaying a "This connection is secure and verified" indicator in the browser to some minimum number of highly reputable signatures.

Comment Re:Encryption (Score 1) 220

There's a simpler solution. Keep using GZip compression, but expose the sensitivity of strings to the compression layer. You could ensure that sensitive strings are transmitted as separate deflate blocks without any compression at all, and ignored for duplicate string elimination. All HTTP/2 would need to specify is the ordering of these values so that the compression can still be reasonably efficient for everything else.
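A rough illustration with Python's zlib (the page text and session token are made-up examples): level-0 deflate emits "stored" blocks, so a sensitive string passes through verbatim with no duplicate-string elimination, and its compressed length reveals nothing about matches against attacker-controlled text:

```python
import zlib

page = b"<html>hello hello hello hello</html>"  # compressible page content
token = b"session=SECRET123"                    # sensitive string

compressed_page = zlib.compress(page, 9)  # normal compression, string matching on
stored_token = zlib.compress(token, 0)    # stored blocks only, no matching

# The stored block carries the token byte-for-byte; its length is a fixed
# function of the token length, independent of surrounding content.
assert token in stored_token
assert zlib.decompress(stored_token) == token
```

In a real deflate stream the sensitive field would be flushed into its own stored block mid-stream rather than compressed as a separate stream; this sketch only shows why stored blocks defeat compression-ratio attacks.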
