
Comment Poor Survey (Score 2) 143

I clicked through to the detailed report (which was about lots of other things), and they didn't classify the results by at least iOS/Android/Windows Phone, or even better by manufacturer.

It's very possible that 99% of Google and Apple device users update the OS as quickly as possible while 0% of Samsung/HTC/etc. users do (because no updates are ever offered to them), so this doesn't tell us anything.

Plus, I would answer "when it's convenient for me", meaning always within a day or so.

It's like they phrased questions to get results to give the most click-baity headlines. This is my shocked face.

Comment Re:Weak process improvement/Few ideas waiting (Score 1) 474

Yes, this post is right.

I'd say the #1 reason is power. Until about 1999, high-end designs could ignore power and do crazy things that were fast but burned power. That has all come to an end. I imagine at 14nm you could create circuits that burn over 1500W on one chip--just imagine the fastest transistors all toggling at the maximum rate. That simply cannot be cooled, or even powered; I don't know what 2000 amps delivered to one chip would even look like. So a limit of about 130W per chip (set by cooling and the sheer amperage involved) has restricted chip design, even at the high end.
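A quick back-of-envelope check of the current figure above (the 0.75V core voltage is my assumption; core voltages at small process nodes are roughly in that range):

```python
# Illustrative arithmetic only: at a hypothetical 1500 W of switching
# power and an assumed ~0.75 V core supply, the current draw is enormous.
power_w = 1500.0          # hypothetical worst-case chip power
core_voltage_v = 0.75     # assumed core supply voltage
current_a = power_w / core_voltage_v
print(current_a)          # -> 2000.0 amps into a single chip
```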

Note that if you look at individual transistors, they are still speeding up--you just can't use them all at once. So this is the practical end of the effective law of doubling process performance every 2 years, and the overall speed improvements are much smaller.

Comment Re:RFC5961 is flawed (Score 1) 115

If you just want to know if a connection exists, the exact global rate limit doesn't matter.

Let's say we spoof 5000 packets/sec to probe whether a particular connection exists, and then send 5 packets on our own legitimate connection to see if we got it right (5 being at or below whatever you think the minimum per-second global throttle rate could be). If we get >= 3 challenge ACKs back on our test channel, the probe was not for a valid connection. It doesn't matter whether the server's throttle is 5 per second or 200 per second.
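The inference step can be sketched like this (a hypothetical illustration; the function name and threshold are mine, not from the RFC or the paper):

```python
# Hypothetical sketch of the side channel described above. We spoof
# packets at well above the global challenge-ACK limit, then send a few
# probes on our own legitimate connection and count the ACKs that return.
def probed_connection_exists(acks_on_test_channel, threshold=3):
    """If our own probes still draw challenge ACKs, the global rate
    limiter was not exhausted by the spoofed traffic, so the guessed
    4-tuple did not match a live connection."""
    return acks_on_test_channel < threshold

print(probed_connection_exists(5))  # False: our spoofed guess hit nothing
print(probed_connection_exists(0))  # True: spoofed guess consumed the budget
```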

A per-connection rate limit does fix the DDOS amplification issue. Yes, there may be lots of sockets open on a server, but hitting each socket with the right traffic to generate these ACKs is hard.

Fixing this on the server by limiting each socket's challenge-ACK rate is easy: keep a global counter of seconds somewhere in the OS, and have each socket hold a "seconds value of the last challenge ACK". If we want to send a challenge ACK, only do it if last_seconds_ack != global_seconds. This limits each socket to one per second. The counter doesn't even need to be large--it can be a byte (or even one bit), since it only needs to prevent sending many within the same second. And this is just to prevent amplification DDOS, which in this case may not even be necessary since it's so hard to generate the right traffic, and these ACKs are so small.
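A minimal sketch of that per-socket throttle (illustrative Python, not kernel code; the names are mine):

```python
class SocketState:
    """Sketch of the per-socket challenge-ACK throttle described above."""
    def __init__(self):
        self.last_ack_second = None  # a single byte would do in practice

    def may_send_challenge_ack(self, global_seconds):
        # Allow at most one challenge ACK per socket per second.
        if self.last_ack_second == global_seconds:
            return False
        self.last_ack_second = global_seconds
        return True

s = SocketState()
print(s.may_send_challenge_ack(100))  # True: first ACK in this second
print(s.may_send_challenge_ack(100))  # False: throttled, same second
print(s.may_send_challenge_ack(101))  # True: a new second has started
```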

Comment RFC5961 is flawed (Score 5, Informative) 115

I read the brief article, and read RFC5961, and here's a quick summary:

A TCP connection is uniquely identified by the combination of: Source IP address; Destination IP address; Source port number; Destination port number. TCP also has a sequence number, which helps reorder packets. It also helps prevent spoofing, but spoofing is still possible. Any computer on the internet can craft a packet to send to a Destination IP address and Destination port with all the other fields spoofed. A spoofer cannot receive a reply (the Destination machine will send any replies to the indicated Source IP address, which the spoofer cannot see).
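To make the 4-tuple identity concrete (addresses and ports below are illustrative):

```python
from collections import namedtuple

# A TCP connection is keyed by exactly these four fields. A spoofer who
# guesses all four can craft packets the receiver cannot distinguish
# from the real endpoint's (example addresses are illustrative).
Conn = namedtuple("Conn", ["src_ip", "src_port", "dst_ip", "dst_port"])

real    = Conn("203.0.113.5", 44321, "198.51.100.9", 80)
spoofed = Conn("203.0.113.5", 44321, "198.51.100.9", 80)
print(real == spoofed)  # True: to the receiver, the very same connection
```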

So, it's possible to inject SYN and RST requests into valid streams, shutting down other people's connections (although you couldn't be sure you've succeeded). RFC5961 tries to prevent this by adding some cases where the SYN/RST are not treated as valid, but instead it sends a special ACK to the source requesting confirmation. To avoid denial of service, these special ACKs are rate-limited to 10 per 5 seconds. Note these special ACKs are only generated if the SYN or RST look "nearly" valid based on the sequence number, otherwise the RST or SYN is ignored.

And that's what they discovered is the problem--open your own connection to any Destination which has long-term connections (and they picked a USA Today website, but anything would work), and every 2 seconds try to get it to generate those special SYN/RST ACKs. If it's not under "attack", you'll get your ACKs.

Then, spoof billions of packets using a chosen source IP address (and loop through all sequence nums and port nums) and guess the dest port (say, 80).

Now send a little traffic to the Destination on your valid connection to try to draw those special SYN/RST ACKs. If you don't get your ACKs (due to the global rate limiting), then you "know" you've stumbled upon a valid combination of Source IP/port, Destination IP/port, and sequence number--so you know who's talking to the Destination machine. If you've picked a Source IP and port that aren't connected, the special ACKs never get generated for your spoofed traffic, so your own slow ACK-drawing traffic is not rate limited, telling you that Source IP/port is not connected to this Destination IP/port.

So RFC5961 turns a pesky annoyance bug into a bug where it's possible to determine who's connecting to a particular website (although it's time consuming).

With more care, they then figure out the sequence number--and once you have that, you can do targeted data injection. (Blind data injection is always possible, but accurately injecting JavaScript that way is unlikely to work, since the sequence number is hard to predict.)

RFC5961 should not do global rate limiting--it leaks important data.

Comment Re:Design by Committee (Score 1) 243

You would have made a terrible HP executive. Maintaining and growing a profitable business gets you nowhere--no one wants to make a person like that CEO of even a smaller company.

You gotta shoot for the moon and miss, and fail upwards slightly. Succeed, and you fly upwards to CEO of a large company. Most silicon valley CEOs just got lucky at some point in their life. Too bad luck doesn't repeat like skill.

And there's no way Intel would have bought PA-RISC; they needed the cover of developing something new. Basically, any deal with Intel would have gone badly for HP, and many at HP knew it. It was a short-term boost (before IA64 shipped) at the expense of the long-term business. Intel at the time wanted to go after the RISC server market--they wanted some of that money for themselves--by creating chips they could sell to HP's competitors to keep HP under control. If IA64 had taken off, it still would have gone badly for HP, in that HP would have had more competitors than ever selling the same chips it was. At least IA64 now is squarely aimed at expensive servers, and HP doesn't have many IA64 competitors.

Lots of people think IA64 was just a big fiasco done by stupid people. It was a smart ploy by Intel to crush Unix servers (and Intel would have been happy if IA64 had worked well, so win-win). On that score, it was very successful.

Comment Design by Committee (Score 4, Interesting) 243

IA64 started as an HP Labs project to develop a new instruction set to replace HP's PA-RISC. VLIW was a hot topic around 1995. HP Labs was always proposing stuff that the development groups (those making chips/systems) ignored, but for some reason this one had legs.

The HP executive culture worked like this: HP hired mid-level executives from outside, who would then do something big to get a bigger job at another company. A lot of HP's poor decisions in the last 20 years can be traced directly to this culture. And there was no downside--if you failed, you'd move to an equivalent job at another company and try again.

So enterprising HP executives turned HP's VLIW project into a partnership with Intel, and in return HP got access to Intel's fabs. This was not done for technical reasons. Intel wanted a 64-bit architecture with patents to lock out AMD, and would never buy PA-RISC, so it had to be something new. HP was behind the CPU performance curve by 1995 because its internal fab hadn't kept up with the industry--HP didn't want to spend the money. So HP could save billions in fab costs if Intel would fab HP's PA-RISC CPU chips until IA64 took off. For these non-technical reasons, IA64 was born, and enough executives at both companies became committed enough to guarantee it would ship.

For a while, this worked well for HP. The HP CPUs went from 360MHz to 550MHz in one generation, then pretty quickly up to 750MHz. I thought IA64 would be canceled many times, but then it became clear that Intel was fully committed, and they did get Merced out the door only 2 years late. IA64 was a power struggle inside Intel, with the IA64 group trying to wrest control from the x86 group. That's where the "IA64 will replace x86" was coming from--but even inside Intel many people knew that was unlikely. Large companies easily can do two things at once--try something, but have a backup plan in case it doesn't work.

But IA64 as an architecture is a huge mess. It became full of every performance idea anyone ever had, which just meant there was a lot of complexity to get right, and many of the first implementations made poor implementation choices. It was a bad time for a new architecture: designed for performance, IA64 missed the power wall about to hit the industry hard. It also bet too heavily on compiler technology, which again all the engineers knew would be a problem. But see the non-technical reasons above--IA64 was going to happen, and performance features had to be put in to make it crush the competition. The PowerPoint presentations looked impressive. It didn't work out: the performance features ended up lowering the clock speed, delaying the projects, and hurting overall performance.

Comment Ponzi scheme clawback (Score 0) 239

My opinion is that Bitcoin is a Ponzi scheme. It's a complex one, so it's not obvious. It's not even intended to be one, but that doesn't matter.

Bitcoin will at some point reach the stage where it makes no financial sense to mine Bitcoins at all (the energy required will greatly exceed the mining return, even for ASICs). Then mining will collapse--but not stop, because some people will still do it, such as botnets. As soon as the Bitcoin mining rate falls to 50% of its peak, Bitcoin is in trouble: it's only secure if enough independent people are mining. So someone with lots of ASICs will attempt to grab all the Bitcoins by monopolizing the mining operation, putting in fake transactions, and supplying enough mining power to dominate the network and push the fraudulent transactions through. This will be hard, maybe even impossible, to reverse, especially if it's done well. Then Bitcoin will crash. I'd love it if someone could estimate when this might happen--I'm not interested in Bitcoin enough to collect the data.

Then everyone will scream, and call it a Ponzi scheme. And it will then appear to have been one.

And Bernie Madoff has shown us that clawback has no statute of limitations--anyone who's ever taken any profit out of Bitcoin will be sued by the losers for all the profit. So then no one will have made any money. And you'll be dealing with courts. Anyone who made money with Bitcoin will lose all that they made, plus lawyer's fees. Even if it's 10 years from now.

Only lawyers will do OK out of this. Sigh.

Comment Re:The problem with the Lisa (Score 1) 171

The original 68000 CPUs couldn't take page-fault exceptions properly--some instructions would partly execute and couldn't be restarted after the exception. So you couldn't really do virtual memory properly with them, even with external logic (which people tried to do). The 68030, which came much later, had a built-in MMU.

Comment The replies here are disappointing (Score 1) 540

Here's my summary of skimming through this mess: "Krugman is a hack and a liar who is always wrong. He's predicting computers will make no further improvements, so that is clearly wrong. Computers are just getting started."

Hmm. I don't know what it is about Paul Krugman that makes people so rabid, but Krugman is actually arguing that the computer revolution is just getting started (against Gordon, who's arguing the opposite). So if the Krugman haters are sure he's wrong about everything, then the logical conclusion is: computers are finished.

I'm sure everyone here basically agrees with Krugman that the computer revolution is not over. Computers will automate more and more things. This flamefest was just pointless.

The much more interesting econo-blog discussion is: if robots can replace humans, and robots can make more robots, then it appears the Luddites may turn out to be right 200 years later: wages will fall. This hasn't happened yet, but outsourcing gives us a partial taste of what this looks like. The interesting question is, what to do about this? Note that taxing robot labor the same way human labor is taxed helps address this issue, but how do you tax robot wages when they aren't paid? And the really interesting question: has this revolution partially begun and is it behind the increasing inequality in advanced countries?

Comment After a WHOLE week? (Score 4, Informative) 674

What company has to close if workers are on strike for a WHOLE WEEK? The company doesn't have to pay hourly workers who don't show up to work...

This looks to me like a corporate version of "suicide by cop"--run your company into the ground (6 CEOs in 10 years, many executives getting big raises, company owned by hedge funds and venture capitalists, company has big debt), and then keep cutting workers pay until they have to say "enough". And then blame the unions.

If you're a company, which is failing and cannot be saved, and you have union workers, how else do you expect the company to finally close up shop? This is what it looks like--try to blame the unions.

The union says they already had half their members laid off, have already cut their pay to below industry average, etc. The union website before the strike started said the following (see http://bctgm.org/PDFs/HostessFactSheet.pdf):

Hostess is not and will not be viable: If Hostess emerges from bankruptcy under its present plan, it will still have too much debt, too high costs and not enough access to cash to stay in business for the long term. It will not be able to invest in its plants, in new products and in new technology.


I hope someone buys Drake's.

Comment Re:Actually I think it's SRAM... (Score 5, Interesting) 178

The WPI report confirms what most everyone suspects: Reading from an uninitialized SRAM returns mostly noise, about 50/50 (but not exactly) 1's and 0's, and highly dependent on temperature. I think what they're saying is something like "Look at uninitialized memory, whose values are apparently random 1's and 0's, and somehow compute a unique fingerprint that is stable for this device, but different from all other devices". I'm not sure that's actually possible. I can't think of anything on chips that would produce "random"-looking data and which wasn't highly temperature dependent.
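One way such a fingerprint might be computed--this is my guess at the general approach, not WPI's actual algorithm--is to read the SRAM many times and keep only the bits that come up the same every time:

```python
# Hypothetical sketch: read uninitialized SRAM several times and keep the
# bit positions that are stable across all reads. Whether enough bits stay
# stable across temperature is exactly the doubt raised above.
def sram_fingerprint(reads):
    """reads: list of equal-length bit lists from power-up SRAM dumps.
    Returns (stable_positions, their_values)."""
    n = len(reads[0])
    stable = [i for i in range(n) if len({r[i] for r in reads}) == 1]
    return stable, [reads[0][i] for i in stable]

reads = [[1, 0, 1, 1],
         [1, 1, 1, 0],
         [1, 0, 1, 1]]
print(sram_fingerprint(reads))  # -> ([0, 2], [1, 1]): bits 0 and 2 are stable
```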

Even if a clever algorithm could "fingerprint" an SRAM device, others have already pointed out all the ways to break this. It's simply a slightly more complex MAC address, and will likely be easy to effectively clone. It's like printing a password on paper in special red ink that only you have, and then saying no one can log in to your system (by typing the password) since they can't replicate that red ink. Umm, the special red ink is a red herring. All you need is the password.

I don't think there's really anything here. There's no details at the PUFFIN site.

Comment Re:Sorry Bruce, but that is total nonsense. (Score 2) 403

Intel in the 90's was performance at any power cost. Then in the last 10 years, it was performance within a limited power envelope, aiming at laptops and desktops. The power they were aiming at was much higher than smartphones, so although they got more "power efficient", you do very different things when aiming at 1W than when aiming at 10W or 100W. If you can waste 5W and get 20% more performance, that's a great thing to do. But not for phones.

I think what you're seeing is Atom was a kludge. If Intel chooses to aim directly at the 1W market, then you'll see there really is no "CISC" overhead.

The ARM Cortex-A9 is comparable in performance per MHz to the Pentium II of the late 90's. That's because ARM is very sensitive to power, not to performance, so they're not throwing in everything that high-performance CPUs have. Intel is coming at the market from the other end--high-performance chips they're trying to trim down to use less power--and they've not executed that well yet. Just look at the Atom: it has an FSB, meaning the memory is attached to a different chip. Lots of wasted power. Umm...ARM chips used in phones have the memory in the same PACKAGE now (stacked die).

Note that ARM has something analogous to the CISC decoder since it has 2 instruction sets it runs (Thumb and ARM). It's not as complex as the decoder needed for x86, though.

Comment My explanation of article (Score 5, Informative) 172

The blog post was a bit terse, but I gather one of the main problems is the following:

Google lets users upload profile photos. So when anyone views that user's page, they see that photo. But malicious users were making their photo files contain JavaScript/Java/Flash/HTML code. Browsers (I think it's always IE) are very lax and will interpret files however they please, regardless of what the web page says. So the web page says it's pointing to an IMG, but some browsers will interpret it as JavaScript/Java/Flash/HTML anyway once they look at the file. So now a malicious user can serve up scripts that seem to be coming from google.com, and so they are given a lot of access at google.com and break its security (e.g., letting you look at other people's private files).

Their solution: user images are hosted at googleusercontent.com. Now, if a malicious user puts a script in there, it only has the privileges of a script run from that domain--which is no privileges at all. Note this just protects Google's security: you're still running some other user's malicious script, but that's not Google's problem.
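A minimal sketch of the header side of that mitigation (these are real HTTP headers, but the function is my illustration, not Google's code; the domain split itself is the key part):

```python
# Illustrative only: responses for user-uploaded images served from a
# separate, privilege-free domain (like googleusercontent.com). Declaring
# the type and adding `nosniff` tells browsers not to reinterpret the
# file as HTML/script, closing the content-sniffing hole described above.
def upload_response_headers(content_type="image/png"):
    return {
        "Content-Type": content_type,
        "X-Content-Type-Options": "nosniff",  # don't second-guess the type
    }

print(upload_response_headers()["X-Content-Type-Options"])  # nosniff
```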

The article then discusses how trying to sanitize images can never work, since valid images can appear to have HTML/whatever in them, and their own internal team worked out how to get HTML to appear in images even after image manipulation was done.

Shorter summary: Browsers suck.

Comment Why? (Score 1) 78

The article said Nintendo Power has over 400,000 print subscribers. How could they not make a go of this? What did they need from Nintendo anyway, other than early access to games to review? I currently get Nintendo Power since I can let my kids read it without having to explain, again, why they can't play M-rated games.

I suspect the threat of shutdown is part of a ploy by the publisher to get something from Nintendo (which was hinted at in the article). If the shutdown actually happens, then the publisher is stupid to throw away several million dollars/year in subscriptions.

Comment Re:Friends (Score 4, Insightful) 948

"A free market fixes everything" is nonsense. Imagine no rules/laws/regulations. Perfectly free market. To win, I'll murder my competition, and get away with it (until they murder me). There are no laws. It's free and fair, brutal and ugly.

OK, so we make murder illegal. And kidnapping, extortion, blackmail, etc. It's no longer a free market. But I don't think anyone minds.

But already, government can be corrupted. A sheriff that aggressively investigates crimes against my competitors while ignoring my crimes gives me an advantage. And this is just serious crimes.

The point is not to get government out of the way, it's to make government enforce fairness (you are right about that). And "less government" is not really the way to do this. I don't want a perfectly free market. If you take econ101, you'll see many ways businesses could screw over consumers with asymmetric info, monopolies, fraud, etc. And I want regulations to eliminate toxins in food, unreasonably dangerous products, etc. And I don't want to drink polluted water.

Solyndra is no big deal--they expected a percentage of businesses the government backed to not succeed, and Solyndra was in that percentage. If there's corruption involved, then I'd be mad, but I haven't heard of any yet. I'm glad the US government invested in the Internet.
