Comment Re:It's not broken... (Score 1) 141

To be clear, the implementation isn't even finished yet-- the project is still working towards correctness, and hasn't gotten there. Once that part is done, the work on optimization begins.

As it turns out (unsurprisingly), implementing a transport protocol that works reliably over the internet in all conditions isn't trivial!

Comment Re:Handled at layer 7 (Score 2) 141

There are definitely people and opinions on both sides of the fence on this.

Unfortunately, though performance might improve with access to the hardware, wide and consistent deployment of anything in the kernel/OS (how many WinXP boxes are still out there??) takes orders of magnitude more time than shipping something in the application.

So... we have a problem: we want to try out a new protocol and learn and iterate (because, trust me, it isn't right the first time out!), but we can't afford to wait long periods of time between iterations and their availability.

Hopefully the project will drive us all towards solutions to these problems that are generic, usable for any new protocol, and which actually work!

Comment Re:Pacing, Bufferbloat (Score 3, Interesting) 141

What seems likely is that when you generate a large burst of back-to-back packets, you are much more likely to overflow a buffer somewhere along the path, causing packet loss.
Pacing spreads those packets out in time, making it less likely that you overflow the router buffers, and so reduces the chance of loss.

TCP actually does do pacing, though of a kind called "ack-clocking": for every ACK one receives, one can send more packets out. Since the ACKs traverse the network and get spread out in time as they pass through bottlenecks, you end up with pacing.... but ONLY while bytes are continually flowing. TCP doesn't pace packets well when the byte stream starts, stops, and restarts, as often happens with web browsing.
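
To make the contrast concrete, here is a minimal Python sketch (purely illustrative, not from any real TCP or QUIC code) of a burst sender versus a paced sender that spreads the same window across one RTT:

    import time

    def send_burst(packets, send):
        # Unpaced: the whole window goes out back-to-back, maximizing
        # the chance of overflowing a shallow buffer downstream.
        for p in packets:
            send(p)

    def send_paced(packets, send, rtt_seconds):
        # Paced: spread the same window evenly across one RTT, so no
        # single queue along the path sees the entire burst at once.
        gap = rtt_seconds / max(len(packets), 1)
        for p in packets:
            send(p)
            time.sleep(gap)

    window = [bytes(1200) for _ in range(10)]  # ten ~MTU-sized packets
    log = lambda p: print(f"sent {len(p)} bytes at {time.monotonic():.4f}")
    send_paced(window, log, rtt_seconds=0.1)

A real sender would pace off a timer and a bandwidth estimate rather than sleeping, but the effect on the queues along the path is the same.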

Comment Re:SCTP (Score 1) 141

It is similar in some ways, and dissimilar in other ways.
One of the outcomes of the QUIC work that is considered a good one is that the lessons learned get incorporated into other protocols like TCP or SCTP.

QUIC absolutely takes security into account, including SYN floods, amplification attacks, etc.
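
As an illustration of the general defense (a stateless address-validation token bound to the client's address-- a sketch of the idea, not QUIC's actual wire mechanism), the server proves the client can receive packets at its claimed source address before committing per-connection state or sending large responses:

    import hashlib, hmac, os, time

    SECRET = os.urandom(32)  # server-side key, rotated periodically

    def make_token(client_ip: str) -> bytes:
        # Bind the token to the claimed address and a coarse timestamp,
        # so replaying it from another address (or much later) fails.
        ts = int(time.time()).to_bytes(8, "big")
        mac = hmac.new(SECRET, ts + client_ip.encode(), hashlib.sha256).digest()
        return ts + mac

    def check_token(client_ip: str, token: bytes, max_age: int = 60) -> bool:
        ts, mac = token[:8], token[8:]
        if int(time.time()) - int.from_bytes(ts, "big") > max_age:
            return False
        good = hmac.new(SECRET, ts + client_ip.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(mac, good)

Because the server keeps no state until the token round-trips, a flood of spoofed handshakes exhausts nothing, and keeping the initial server reply no larger than the client's first packet removes the amplification payoff.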

Comment Benchmarking premature; QUIC isn't even 100% coded (Score 5, Informative) 141

I say this as someone working on the project.

The benchmarking here is premature.
The code does not yet implement the design; it is just barely working at all.

Again, they're not (yet) testing QUIC-- they're testing the very first partial implementation of QUIC!

That being said, it is great to see that others are interested and playing with it.

Comment Seems like a good idea (Score 1) 139

G-sync (i.e. sync originated by the graphics card) seems like a good idea.
It:
    allows single or multiple graphics cards in a computer to emulate genlock across multiple monitors, so that the refresh rates and refresh times of those monitors interact properly
    allows frame rendering and output to be synchronized, reducing display lag, which matters for gamers and realtime applications
    allows the graphics card to select the highest possible frame rate (possibly under 60 Hz) when driving higher resolutions (e.g. 4K or 8K) over cables/interfaces that don't have the bandwidth for a full 60 Hz

Good stuff.
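
A rough sketch of the inversion this implies, in Python against a made-up panel interface (the names and numbers are illustrative only): the refresh waits on the frame, instead of the frame racing a fixed refresh tick.

    import random
    import time

    MIN_INTERVAL = 1 / 144  # fastest refresh the panel supports
    MAX_INTERVAL = 1 / 30   # panel must be refreshed at least this often

    def render_frame():
        # Stand-in for rendering work of variable cost.
        time.sleep(random.uniform(0.004, 0.030))

    last_refresh = time.monotonic()
    for _ in range(10):
        render_frame()
        elapsed = time.monotonic() - last_refresh
        if elapsed < MIN_INTERVAL:
            # Frame finished too soon: hold it until the panel can scan out.
            time.sleep(MIN_INTERVAL - elapsed)
        # With fixed-rate vsync we would instead wait for the NEXT tick,
        # adding up to a full refresh interval of display lag. (A real
        # driver would also force a repeat scanout if MAX_INTERVAL passed.)
        last_refresh = time.monotonic()
        print(f"scanout at {last_refresh:.4f}")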

Comment Re:Who cares (Score 1) 278

I think you mean the other way around: you don't add standards to patents, you assert patents over parts of standards. Either that, or you mean something about adding patents to patent-protection pools?

In any case, it isn't true that their patents are worthless when they don't put them in.
Look at Microsoft with the FAT patents.
What the current ruling does is encourage companies with patents NOT to disclose them, or to keep them as patent applications as long as possible (so they can be kept secret for longer), only to pull the patent out of the proverbial hat after the technology is established.

Again, the only way to fix this is to provide a way to take these patents away from the patent holders.

Comment Re:Who cares (Score 1) 278

You are being short-sighted, then... because as a natural consequence companies will not license their IPR for use in standards, and, if that isn't deterrent enough, they'll stop making standards altogether.

The better solution is fixing the damn patent system. A mandated eminent-domain purchase of any patent deemed essential and then allowed for public use, for instance, would be an interesting way out, though fraught with its own problems (who determines the fair price? This is always the problem with any eminent-domain issue). I'd prefer those problems to what we have today, though.

Comment Re:FOSS has the answers (Score 1) 278

What the AC says here seems spot on.

The easy solution for companies is to stop releasing whatever people are calling 'standards' and instead let people reverse engineer things (and then sue with their patents), or to offer no FRAND terms when licensing the IPR, or to not license the IPR at all.

What they're doing here is not going to incent the proper behaviors from the actors-- it is pretty short-sighted.
It would be better to extinguish these patents for some one-time fee when they become essential to the economy or to the public benefit in some way.
