
Comment Re:Cant compete, but sue. (Score 1) 412

The entire purpose of patents is to block out competitors by securing to inventors exclusive rights over their inventions. Who would invest the money to invent anything if somebody else could just copy the result without years of debt to pay off?

And for that matter, software patents are good and they encourage innovation. Software is not math; it's a machine. Inventing a new sorting algorithm is no different from inventing a new physical sorting process, except that it's simulated by a computer. The real problem is obvious patents, absurd patents like 'one click', and all the ones with clear prior art that get granted anyway.

Apple has done a huge amount of research and invention in many areas and deserves to gain an advantage from it. The position of the defenders of Android is that blatant copying is okay, which makes them part of the anti-intellectual crowd that doesn't value ideas and creativity.

Comment Re:When Hubris takes precedence over Brains... (Score 0) 173

Only on a technicality could one argue that Dalvik is "not Java". It isn't exactly the same, and it doesn't run Java bytecode directly.

But no technical person can say in good conscience that Dalvik is not simply a blatant copy of the JVM, changed just enough to maybe get around some patents. The bytecode is exactly what you would expect from a straight conversion of Java bytecode to a register format, and the application format is just a bunch of Java class files merged together so there are fewer redundant strings and such. The basic semantics are entirely the same as Java's.
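
To make the "straight conversion" point concrete, here is a toy sketch of how a stack-based bytecode sequence maps onto register-based instructions. This is not the actual dx tool and these are not real Dalvik opcodes, just made-up names to illustrate the general transformation:

    # Toy stack-to-register translation: each operand-stack slot gets a
    # register name, so "load a; load b; add" becomes one three-address op.
    def stack_to_register(ops):
        stack = []          # simulated operand stack, holds register names
        out = []            # register-based instructions
        temp = 0
        for op, *args in ops:
            if op == "load":                 # push a local variable
                stack.append(f"v{args[0]}")
            elif op == "add":                # pop two operands, push result
                b, a = stack.pop(), stack.pop()
                dst = f"t{temp}"
                temp += 1
                out.append(f"add {dst}, {a}, {b}")
                stack.append(dst)
            elif op == "store":              # pop into a local variable
                out.append(f"move v{args[0]}, {stack.pop()}")
        return out

    print(stack_to_register([("load", 1), ("load", 2), ("add",), ("store", 3)]))
    # -> ['add t0, v1, v2', 'move v3, t0']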

Ultimately what it comes down to is that Google purposely and blatantly worked to get around some patents they considered bogus, creating an incompatible version of Java -- objectively there's simply no denying this. The question is then over whether you feel they were justified to do this or not. The people insisting so loudly that 'Dalvik is not Java' are those that feel that creating an incompatible Java is 'wrong'... but they still want it in their phone.

Comment Re:Detailed info on SPDY (Score 1) 310

Let's say it takes 200 ms to generate the dynamic data. In the meantime, the browser can check whether it has the newest JS version from my CDN, since it already knows it will need it.

If there were an HTTP "RESOURCES" request to go along with GET and HEAD that did the same thing, the browser could send one along with a GET on opening the connection and you would get the same benefit, plus the extra benefit of being able to query this information independently of requesting the content.
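
To be clear about what I mean, a client-side sketch of such an exchange might look like this; the "RESOURCES" verb is entirely hypothetical and no real server understands it:

    import http.client

    # http.client will send an arbitrary method string, so a server that
    # understood this made-up "RESOURCES" verb could answer with the list
    # of sub-resources the page will need, without sending the page body.
    conn = http.client.HTTPConnection("example.com")
    conn.request("RESOURCES", "/index.html")
    resp = conn.getresponse()
    print(resp.status)
    print(resp.read().decode())   # e.g. "/style.css\n/app.v42.js\n/logo.png"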

In any case, if the browser already has it cached then you gain nothing in terms of page loading speed (the browser could speculatively load it anyway), and you can tell the browser to cache these kinds of things for a long time by versioning the URL when they have to change.
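
For what it's worth, the URL-versioning trick is just this; the /static/v42/ naming scheme below is only an example convention, not anything standard:

    # Serve assets under versioned URLs so they can be cached essentially
    # forever; bumping the version in the URL is what invalidates them.
    CACHE_HEADERS = {"Cache-Control": "public, max-age=31536000"}  # ~1 year

    def asset_url(name, version):
        return f"/static/v{version}/{name}"

    print(asset_url("app.js", 42), CACHE_HEADERS)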

So occasionally the browser can start loading a resource before the main page is ready, and unless every resource fits into this category (otherwise the page load is held up by the ones that aren't known at the initial GET), it only reduces the average page load time by some fraction of the transfer time for that resource. Not worth it.

Ah, but as you yourself point out, HTTP pipelining can not. People have tried for a long time to get HTTP pipelining to work properly, but so far it's not the solution people are looking for.

I think HTTP pipelining is actually the solution people are looking for. The problem is just that it isn't supported well.

If you take a close look at the SPDY performance numbers in their whitepaper, you'll see that they don't compare it to HTTP pipelining, even though the test is conducted with a server and client they control that both support pipelining. The reason? They don't want to say "use SPDY, it's 3% faster at loading pages than HTTP pipelining" (made-up number).

Further, you'll see that they don't compare SPDY over SSL with HTTP over SSL. The reason? They don't want to say that "SPDY w/GZIP compression over SSL sends 2% less data than HTTP over SSL w/DEFLATE compression".

The whole thing is obviously such a sham. They have some other reason for pushing SPDY, maybe just to do it, but whatever it is it's certainly not evidence-based.

Comment Re:SPDY clarifications (Score 1) 310

Once again, thank you for taking the time to answer my questions. I wish the answers, in general, were something more substantial than just 'we're Google, trust us, we're smart, durr', since that's essentially what you've written here. For instance:

There is external research on this topic. Feel free to look it up as we did.

I expect somebody who is pushing a new protocol based on published research to be able to at least cite their sources. Even on the web, the only research I can identify is two PowerPoint presentations by the same guy, with no review. Frankly, the fact that you stand behind performance numbers that come from an HTTP stack that was not state of the art (even at the time) makes me seriously doubt the quality of the research that went into this new protocol.

I don't see any serious response to why the problems in HTTP couldn't just be fixed in the standard instead of being papered over. I suppose the unstated assumption is that the W3C is too slow, or too busy wasting time with RDF and the 'semantic web' to improve the spec, so you have no choice but to bypass standards. In any case, I would encourage you to collect actual evidence for the need for SPDY and also to at least try alternatives that modify HTTP itself.

Comment Re:Detailed info on SPDY (Score 1) 310

1. It can specify other needed files in the header, so the browser can start loading them before it gets the HTTP page

This is only a benefit if the resources are not already cached locally, so that the browser can start loading a .png, for instance, while the main page HTML downloads; if the resource is already in memory it's a cost, not a benefit. And browsers could get most of this anyway by just remembering which resources were used on a page, even if the page itself is not cached. So this is an extremely marginal benefit at the cost of some complexity and bandwidth.
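
The "just remember what the page used last time" approach is about this simple; purely an illustrative sketch, all names invented:

    from collections import defaultdict

    seen_subresources = defaultdict(set)    # page URL -> resources it used last time

    def record(page, resource):
        seen_subresources[page].add(resource)

    def speculative_prefetch(page, fetch):
        # Kick off fetches for everything the page needed before, as soon
        # as the page itself is requested again, without any server hints.
        for resource in seen_subresources[page]:
            fetch(resource)

    record("/index.html", "/style.css")
    record("/index.html", "/app.js")
    speculative_prefetch("/index.html", fetch=print)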

2. It can send more than one file at a time. It can mux multiple files, while pipelining only allows you to transfer files one at a time.

If multiple files are multiplexed, then when the connection drops multiple files have to be restarted in the middle instead of just one, and dynamically generated resources need to be regenerated from scratch. The only practical benefit of sending multiple files at once is that the browser can start processing them with partial data (given just an image header, for instance, it can allocate the memory for the image). This again is very marginal compared to the transfer time, and also more complex.

3. It can prioritize some files over others.

So can pipelining (in general, not "HTTP pipelining" specifically). The browser sends the server a list of resources, and the server replies by sending whole files in smallest-first order, for instance. The difference is that if you send whole files at once, pipelined, you don't have to explicitly state priorities or have a mechanism for setting and adjusting them, so pipelining is less complex.

4. It can compress the HTTP headers. Modern browsers send their life history, the weather outside, everything that has ever been installed on the computer, every language / encoding they can possibly use, cookies, referer, when they saw the content last, what the ETag of that content was, and so on. Header compression can actually make a difference.

So can SSL, which SPDY basically requires anyway because of proxies. Enable deflate compression over SSL and it compresses headers.
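
Whether a TLS stack will actually negotiate DEFLATE depends on how OpenSSL was built (and plenty of deployments disable it), but checking what got negotiated is trivial; a minimal sketch, assuming a server that still offers compression:

    import socket, ssl

    ctx = ssl.create_default_context()
    # Recent Pythons set OP_NO_COMPRESSION by default; clear it so the
    # handshake can negotiate compression if the OpenSSL build allows it.
    ctx.options &= ~ssl.OP_NO_COMPRESSION

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.compression())   # e.g. 'DEFLATE', or None if not negotiated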

You should read the whitepaper at http://www.chromium.org/spdy/spdy-whitepaper [chromium.org] - it's quite interesting.

It's really not that interesting. They make a lot of claims without sources or any support whatsoever and won't answer questions about them; for instance, see the threads here or on Reddit where Google employees post about SPDY (i.e., they are reading the thread) but won't answer questions about methods and sources or defend their theories.

The problem that SPDY is supposed to solve in its overly complicated way has a simple solution: tweak HTTP pipelining so that the server can respond in any order it wants to. Then servers can send resources when they are available, in smallest-first order, or whatever, and the pipeline doesn't block on ad.doubleclick.net (it's just sent last, when it has finally finished profiling you). That's all there is to it. The HTTP designers didn't do it because they didn't want to go far enough with changes, instead tacking pipelining on almost as an afterthought. Maybe Google invented SPDY because they are afraid of tweaking HTTP or think standards move too slowly? I don't know; all I know is that SPDY is bad news.
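
Here's roughly what I mean by "respond in any order", as a toy server-side sketch. The X-Request-Path header is something I made up for the illustration; real HTTP pipelining has no way to label out-of-order responses, which is exactly the missing piece:

    RESOURCES = {
        "/index.html": b"<html>...</html>" * 100,
        "/style.css":  b"body{}" * 10,
        "/ad.js":      b"// slow third-party ad script" * 1000,
    }

    def serve_pipeline(requested_paths):
        # Reply smallest-first so small files never wait behind a big one;
        # a real server would instead send each body as it becomes ready.
        for path in sorted(requested_paths, key=lambda p: len(RESOURCES[p])):
            body = RESOURCES[path]
            header = ("HTTP/1.1 200 OK\r\n"
                      f"X-Request-Path: {path}\r\n"
                      f"Content-Length: {len(body)}\r\n\r\n").encode()
            yield header + body

    for response in serve_pipeline(["/index.html", "/ad.js", "/style.css"]):
        print(response[:40], b"...")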

Comment Re:SPDY clarifications (Score 3, Interesting) 310

Since we've got it direct from the horse's mouth -

- Why server push? Nobody seems to think it's a good idea, and it makes things more complicated for everybody involved, including proxies. What is the rationale for this feature?

- Why did you name it "SPDY" to show "how compression can help improve speed" when SSL already supports compression?

- In the performance measurements in the whitepaper, what HTTP client did you use, and what multiple-connection multiplexing method was used, if any? How were the results for HTTP obtained? For instance, the whitepaper says an HTTP client using 4 connections per domain "would take roughly 5 RTs" to fetch 20 resources, which implies theoretical math rather than measurement. Were situations like 10 small requests finishing in the time it takes to transfer 1 large request taken into account? (i.e., in practice multiple requests can be made without increasing total page load time)

- The main supposed benefit seems to be requesting more than one resource at once. Then a request could stall the connection while being processed (ie doubleclick ad) and hold up everything after it, so then you add multiplexing, priorities, canceling requests, and all that complication. Why not just send a list of resources and have the server respond back in whatever order they are available? This provides the same bandwidth and latency with superior fault handling (if the connection closes the browser has only one resource partially transferred instead of several).

- The FAQ kind of reluctantly admits that HTTP pipelining basically has the same benefits in theory as SPDY, except when a resource takes a while and holds up the remaining ones. So what benefit would SPDY have over just fixing pipelining so that the server can respond in whatever order it chooses? The only real problems with HTTP pipelining are fixed-order responses and bad implementations (i.e., IIS), correct?

Barring really good explanations it looks to me like SPDY is just very complicated and increases speed basically as a side-effect of solving other imaginary problems.

Comment Re:Detailed info on SPDY (Score 1) 310

And what we gain is compressed HTTP headers and a requirement to support multiple requests, and multiple streams.

Actually, you don't even gain compressed headers. Since SPDY (a made-up name just to be "cute") uses SSL to bypass proxies, you could already compress with DEFLATE or whatever else people want to standardize on.

I would much rather like to see anything based on SCTP.

And why hasn't SCTP caught on? Because like SPDY it doesn't solve an actual problem people have.

Comment Re:Anyone else slightly bored of the browser wars? (Score 2) 176

once either side gets a good win (like IE 5-6 did), that is where the trouble starts: the winner separates from the standard and forces its own standards.

Then I'd be most worried about Chrome. IE can only affect the Windows market, which isn't even assured to be relevant in the not-so-distant future. Mozilla has a history of open processes and backward compatibility; for instance, there was a huge debate and published rationale before switching to the awesome bar, and you can make it 'less awesome' if you want to. Chrome, on the other hand, already includes custom junk like Native Client and SPDY (which is a crappy protocol, by the way), and like GNOME they change the UI on a whim because they feel like it, not because of user testing and discussion. Also, there's no easy way to use a particular version of Chrome, and Google advocates a 'rolling standard' for HTML (another thing that rolls is a treadmill).

Comment Re:RealD is a two-plane gimmick (Score 2) 313

RealD Cinema ... gives up to two planes of vision at a time

No, it gives two projections of vision at a time, one for each eye... just like your eyes get from the rest of the world (each eye only sees 2D, of course). It can be as 3D as actually being there, except that you cannot change the focus; your eyes should always be focused on the screen regardless of what is projected. This causes eye strain and perception problems for some people.

RealD does have some problems... mostly, people who can perceive flicker at 72 Hz or greater will see some flickering, and with movies made from 24 fps source material each frame is repeated three times (per eye), so people with fast perception will see some stuttering; frames are flashed in the sequence 1 1 1 2 2 2 3 3 3..., so if you can perceive individual frames you'll see the action frozen for two frames out of three.
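
The arithmetic behind those numbers, assuming the usual 24 fps source and triple-flash projection:

    source_fps = 24          # film source frame rate
    flashes_per_frame = 3    # each source frame is flashed three times per eye
    eyes = 2
    print(source_fps * flashes_per_frame)         # 72 Hz seen by each eye
    print(source_fps * flashes_per_frame * eyes)  # 144 Hz total projector rate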

But yeah, if you are only seeing two planes in a movie like Avatar, made from 3D source, then your brain is not processing it correctly (i.e., you have some visual perception problem). You may in fact not be perceiving the actual world in 3D as others do. I wouldn't expect you to see anything in those random-noise pictures where you cross your eyes, for instance.

Comment Re:Interesting proposal; just might work (Score 3, Insightful) 254

Actually, it's not a bad compromise for Google.

FTFY. Customers will get their internet wirelessly, because they move around and want the internet. Phones, iPads, laptops... these all make people want wireless internet.

Businesses use wired internet because they have a fixed location and don't need to roam.

So what Google is saying is "don't extort us, but do extort users". This is a perfect world for Google, because with their deep pockets they can bribe wireless carriers to muscle Bing and Apple and whoever else out of the market. But with guaranteed fair wired access, worst case they could start their own wireless service... they would only have to set up the wireless instead of having to potentially own everything in between their servers and the user; if their wireless network had to hook up to Verizon for instance, then without wired neutrality Verizon could make it prohibitively expensive.

In the end, if any part of the network is not neutral, then to users none of it is. Which makes this initiative from Google a case of "do less evil", or worse.

Comment Re:Good luck with that. (Score 1) 764

[Microsoft] came into the [netbook] game late with an inferior product, but used their position to push the hardware manufacturers and retailers to sell XP netbooks instead of Linux netbooks.

Microsoft won with netbooks because they had a better product. Windows XP would have beaten out Linux on netbooks even without any network effects from being able to run Win32 programs.

For example, when I put Linux on my netbook it ran at ~12 watts idle, whereas Windows XP idled at ~9 watts. So Windows XP got about 1.3x the battery life. Not to mention that XP was better at Flash, games, and Firefox.
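
Just to show where the 1.3x comes from (battery life scales roughly inversely with idle draw, assuming idle power dominates):

    linux_idle_w, xp_idle_w = 12, 9      # measured idle power draw from above
    print(linux_idle_w / xp_idle_w)      # ~1.33x longer runtime at 9 W than at 12 W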

It was possible, with a ton of work, to get a particular distro to run at the same idle wattage by lowering the core clock frequency (not the CPU frequency), but even then it was flaky and broke on distro upgrades.

Comment Re:Coal (Score 2, Interesting) 635

If I have to pay for the negative externalities of the process ... then my process is only competitive for gold prices above $1050 per gram. However, if I can get away with just dumping the toxic water somewhere for free, then at $50 per gram of gold my process is highly competitive

There is another angle to this. If you can improve your solar efficiency by 0.1% but it will cost you $10 million to modify the factory, then you need to recoup that $10 million from sales that would otherwise go to competitors or not be made at all. If you aren't selling much, you have less ability to improve the product.

So the reason we should be investing a lot in solar, in the form of subsidies, is to grow the market, which will improve the technology as a side effect. The difference between solar and a lot of other green energy sources is that there is room for large improvements in efficiency. Even if solar is not the cost-effective choice now, we should still invest in it so that it will be.

Comment Re:Pass Phrases (Score 1) 563

Pass phrases are the wrong answer because they have the same weakness as passwords... once the adversary knows the phrase, you are screwed. While you are sleeping, they are using the passphrase they captured the last time you entered it.

But it doesn't matter whether your password is "cat" or "password" or "myvoiceismypassportverifyme": if you have to hit a physical button to log in, then the worst they can do is hijack that one login. And that's a much harder problem for them, and much easier to defend against.

Cracking is not the problem. Software-only credentials are the problem.
