Comment Re:Code (Score 2) 80

Bingo. Imagine an LLVM-based optimisation pass that uses profiling data to take a hot code block and translate it to run on the FPGA: anywhere in your implementation where the CPU core, rather than memory access, is the bottleneck. And since the FPGA is in the CPU package, you could shift from running x86 instructions to raw hardware without the complexity and latency of piping data to a GPU or other external device.

Comment Re:What happens if (Score 1) 281

It doesn't work that way. To snatch someone's coins you have to break their private key and produce a transaction signed by it.

You could refuse to include some transactions, or you could spend your own coins twice on two different forks of the block chain. I don't think there are any other ways to game the system.

Comment Re:A model based on social covenants (Score 1) 170

To break up the key, you could just use Reed-Solomon error correction: N bits of key plus M extra bits for error correction. Then you break it into numbered pieces. Any combination of pieces providing N bits can be used for recovery. If you assemble more than N bits, you can even correct some amount of bit rot.
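A minimal sketch of the idea in Python, covering only the degenerate single-parity case (one extra piece, so any N of the N+1 pieces recover the key). The function names are my own; full Reed-Solomon with multiple parity pieces would need Galois-field arithmetic rather than plain XOR.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> dict:
    """Split `key` into n numbered data pieces plus one XOR parity piece."""
    assert len(key) % n == 0, "pad the key to a multiple of n first"
    size = len(key) // n
    pieces = {i: key[i * size:(i + 1) * size] for i in range(n)}
    pieces[n] = reduce(xor_bytes, pieces.values())  # parity piece
    return pieces

def recover_key(pieces: dict, n: int) -> bytes:
    """Recover the key from any n of the n+1 numbered pieces."""
    missing = [i for i in range(n) if i not in pieces]
    if missing:
        (m,) = missing  # a single parity piece tolerates one missing data piece
        # XOR of the parity piece and the surviving data pieces is the lost piece.
        pieces[m] = reduce(xor_bytes, pieces.values())
    return b"".join(pieces[i] for i in range(n))
```

With real Reed-Solomon the same shape holds, just with more parity pieces and field multiplication instead of XOR.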

Comment Re:This is news? The stock market is a house of ca (Score 1) 382

There's a strong correlation between share price changes and margin loan acceleration. Most of the "growth" of the share market comes from people borrowing money to speculate, not because those companies are producing more value themselves. Of course, what goes up must come down.

Comment Re:Thermodynamically Impossible (Score 1) 311

Once your passive black road has been covered by white snow, you've lost. The sun's energy will bounce off the snow, so you have to wait until the ambient temperature rises enough to melt it. If you can use some stored energy to melt the snow, you can start absorbing energy on your black surface again to recharge. You may eventually lose the war and run out of stored energy, but you might be able to delay the inevitable.

Comment Re:what the FEC... (Score 1) 129

I'm thinking of "Network Coding Meets TCP", though that paper doesn't give a great background. I've experimented with my own implementation, but had to shelve it due to lack of time. I'll try to quickly summarise the core idea:

You have packets [p1, ..., pn] in your stream's transmission window.

Randomly pick coefficients [x1, ..., xn] and calculate a new packet = x1*p1 + x2*p2 + ... + xn*pn (in a Galois field, so '+' is an XOR and '*' is a pre-computed lookup table). Send the coefficients in the packet header.

The receiver collects the packets and attempts to combine pairs of packets to reduce the complexity of the coefficients, basically solving simultaneous equations. That sounds complicated, but the algorithm isn't too hard:

- Keep the current set of packets, sorted by their coefficients. When a packet arrives, attempt to subtract each of your existing packets that has a smaller coefficient set (again, Galois-field maths). If you're left with nothing, the packet carried no new information, so throw it away.

- Attempt to subtract this new packet from each of the existing packets in your set. Insert your new packet into the list.

When you have a packet whose most significant coefficient is 1, you know you will eventually be able to decode that packet, and that you can eliminate that coefficient from any other incoming packet. So you can send an acknowledgement to the sender and they can advance their transmit window. Once you have eliminated all the other coefficients, you can deliver the packet to the application. Keep each packet in memory until the sender has advanced their window and stopped sending it.

Each incoming packet may eliminate a coefficient from the packets in your list while introducing a new one. If you don't send extra redundant packets, you may never be able to decode anything: consider the worst case where every packet is the XOR of two neighbouring packets in the stream, and you can't decode anything until you receive a single packet by itself. Sending more redundant packets reduces the latency to decode the stream, at the cost of more useless packets received.
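The steps above can be sketched in Python. This is my own toy illustration, not the paper's implementation: the GF(256) reduction polynomial 0x11b, the brute-force inverse, and all names are assumptions, and a real implementation would use lookup tables and handle window advancement.

```python
import random

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8); '+' in this field is XOR. Uses the polynomial
    0x11b (an arbitrary choice for this sketch)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return p

def gf_inv(a: int) -> int:
    """Brute-force multiplicative inverse; fine for a sketch."""
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def encode(packets):
    """Combine equal-length byte packets with random coefficients."""
    coeffs = [random.randrange(256) for _ in packets]
    payload = [0] * len(packets[0])
    for c, pkt in zip(coeffs, packets):
        for i, byte in enumerate(pkt):
            payload[i] ^= gf_mul(c, byte)
    return coeffs, payload

class Decoder:
    """Incremental Gaussian elimination, following the two bullets above."""

    def __init__(self, n):
        self.n = n
        self.rows = []  # (coeffs, payload) pairs, kept fully reduced

    def receive(self, coeffs, payload):
        coeffs, payload = list(coeffs), list(payload)
        # Step 1: subtract each existing packet to cancel its pivot coefficient.
        for rc, rp in self.rows:
            pivot = next(i for i, c in enumerate(rc) if c)
            f = coeffs[pivot]
            if f:
                for i in range(self.n):
                    coeffs[i] ^= gf_mul(f, rc[i])
                for i in range(len(payload)):
                    payload[i] ^= gf_mul(f, rp[i])
        if not any(coeffs):
            return False  # no new information: throw it away
        # Normalise so the leading coefficient becomes 1 (decodable eventually).
        pivot = next(i for i, c in enumerate(coeffs) if c)
        inv = gf_inv(coeffs[pivot])
        coeffs = [gf_mul(inv, c) for c in coeffs]
        payload = [gf_mul(inv, b) for b in payload]
        # Step 2: subtract the new packet from each existing packet in the set.
        for rc, rp in self.rows:
            f = rc[pivot]
            if f:
                for i in range(self.n):
                    rc[i] ^= gf_mul(f, coeffs[i])
                for i in range(len(rp)):
                    rp[i] ^= gf_mul(f, payload[i])
        self.rows.append((coeffs, payload))
        return True

    def decoded(self):
        """After n independent packets, every row is a unit vector."""
        if len(self.rows) < self.n:
            return None
        out = [None] * self.n
        for rc, rp in self.rows:
            out[rc.index(1)] = bytes(rp)
        return out
```

Feeding the decoder randomly coded packets until `decoded()` returns non-None recovers the original window; dependent packets are simply discarded, which is where the redundancy-percentage tuning comes in.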

Comment Re:Packet loss models? (Score 1) 129

(I haven't read this paper yet, but I've read other network coding material and experimented with the idea myself.)

With TCP, when a packet is lost you have to retransmit it. You could simply duplicate all packets, like using RAID1 for HDD redundancy. But this obviously wastes bandwidth.

Network coding combines packets in a more intelligent way, more like RAID5/RAID6 with continuous adaptation based on network conditions. Any packet that arrives may allow you to deduce information you haven't seen before. Basically, each packet is the result of an equation like f(p1, p2, p3) = a*p1 + b*p2 + c*p3. As each packet arrives, you attempt to solve the set of simultaneous equations you have received. When you have reduced an expression to just a single packet, you send it up the protocol stack.
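A toy worked example over GF(2), where the coefficients are 0 or 1 so '+' collapses to XOR and '*' to AND. The 4-bit "packets" and coefficient choices are made up for illustration:

```python
# Three toy "packets" (just 4-bit integers here).
p1, p2, p3 = 0b1010, 0b0110, 0b0011

# Three coded packets survive the trip; the coefficients (a, b, c)
# travel in each packet's header.
e1 = p1 ^ p2         # f(p1, p2, p3) with a=1, b=1, c=0
e2 = p2 ^ p3         # a=0, b=1, c=1
e3 = p1 ^ p2 ^ p3    # a=1, b=1, c=1

# Solve the simultaneous equations by elimination:
d1 = e3 ^ e2         # (p1+p2+p3) - (p2+p3) = p1
d2 = e1 ^ d1         # (p1+p2) - p1 = p2
d3 = e2 ^ d2         # (p2+p3) - p2 = p3
assert (d1, d2, d3) == (p1, p2, p3)
```

Any three independent combinations would have worked; a fourth coded packet would have been redundant, which is the RAID5-style safety margin.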

You still need a TCP-like acknowledgement scheme so that you can: rate-limit the flow of packets based on measured congestion, tweak the percentage of redundant packets sent based on measured packet loss, and advance the stream to include new data.

If you get the network coding parameters wrong, the connection still might stall, or you might be sending too much redundant data. If everything is going well, a link with 10% packet loss just means that your stream is transferred 10% slower.

Comment Re:what the FEC... (Score 1) 129

What if you could transmit data without link layer flow control bogging down throughput with retransmission requests?

TFS makes it look like network coding can magically send data without any form of ACK & Retransmit. Network coding still requires feedback from a flow control protocol. You need to tweak the percentage of extra packets to send based on the number of useful / useless packets arriving, and you still need to control the overall rate of transmitted packets to avoid congestion. The goal is to make sure that every packet that arrives is useful for decoding the stream, regardless of which packets are lost. So yes, it's a kind of automatically adapting error correction protocol for a stream service.
